
Segfaults when loading model with local functions, works fine if model is inlined by ONNX #16170

Closed
BowenBao opened this issue May 30, 2023 · 6 comments · Fixed by #16325
Labels: converter:dynamo (issues related to supporting the PyTorch Dynamo exporter), dependencies (pull requests that update a dependency file)

Comments

@BowenBao
Contributor

BowenBao commented May 30, 2023

Describe the issue

As the title says, ONNX Runtime segfaults when loading a model that contains local functions. Here is the example model.

To reproduce

Download the model from the link above. Install onnx from source (needed for the inliner; this step is not necessary to reproduce the segfault) and onnxruntime from the nightly feed:

```shell
pip install --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ ort-nightly==1.16.0.dev20230528001
```

```python
import onnxruntime

# Segfaults
sess = onnxruntime.InferenceSession("segfault.onnx", providers=["CPUExecutionProvider"])

# Works if the model is inlined first.
# Needs the onnx main branch for the inliner.
import onnx
import onnx.inliner

model_proto = onnx.load("segfault.onnx")
inlined_model_proto = onnx.inliner.inline_local_functions(model_proto)
sess = onnxruntime.InferenceSession(inlined_model_proto.SerializeToString(), providers=["CPUExecutionProvider"])
```
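As a side note, one can check whether a model actually carries local functions before bothering to inline: a `ModelProto` stores them in its repeated `functions` field. A minimal sketch of such a guard (the `_FakeModel` stand-in is hypothetical, used only so the snippet runs without a model file; with onnx installed you would pass a loaded `ModelProto` instead):

```python
def needs_inlining(model) -> bool:
    # A ModelProto lists model-level (local) functions in its repeated
    # `functions` field; only models with a non-empty list hit the
    # nested-function shape-inference path that crashes here.
    return len(model.functions) > 0

# Hypothetical stand-in for a loaded ModelProto, for demonstration only:
class _FakeModel:
    functions = []

print(needs_inlining(_FakeModel()))  # False: no local functions, nothing to inline
```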

Urgency

No response

Platform

Linux

OS Version

20.04

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.16.0.dev20230528001

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

@BowenBao
Contributor Author

gdb shows this is likely the same issue as onnx/onnx#5212, so it should be resolvable by bumping the onnx submodule commit to onnx/onnx@213b525.

```
Thread 1 "python" received signal SIGSEGV, Segmentation fault.
onnx::shape_inference::ShapeInferenceImplBase::process (this=this@entry=0x7fffffff9e50, func_proto=..., ctx=...)
    at /home/bowbao/onnxruntime/build/Linux/RelWithDebInfo/_deps/onnx-src/onnx/shape_inference/implementation.cc:591
591           if (type->value_case() == TypeProto::kTensorType && ctx.getInputData(i) != nullptr) {
```
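The crashing line dereferences `type` without a null check, and for nested function protos that pointer can be null. A minimal Python sketch of the guarded pattern the upstream fix amounts to (illustrative only, not the actual onnx C++ code; the dict-based `type_proto` and the return strings are invented for the demonstration):

```python
def infer_input(type_proto, input_data):
    # Guard first: a nested-function input may have no declared type at all.
    # The unguarded C++ equivalent dereferences a null pointer and segfaults.
    if type_proto is None:
        return None
    # Mirrors the original condition: tensor-typed input with known data.
    if type_proto.get("value_case") == "tensor_type" and input_data is not None:
        return "infer_from_data"
    return "infer_from_type"

print(infer_input(None, [1.0]))  # None returned instead of a crash
```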

@snnn snnn added the dependencies Pull requests that update a dependency file label Jun 1, 2023
@snnn
Member

snnn commented Jun 1, 2023

@pranavsharma , how should we proceed?
@liqunfu , will ONNX have a patch release for this bug? Or will they cherry-pick the fix to a release branch?

@BowenBao BowenBao added the converter:dynamo issues related supporting the PyTorch Dynamo exporter label Jul 26, 2023
@pranavsharma
Contributor

As discussed, we have an internal work item to add a CMake option that allows building ORT with a different ONNX commit.

@BowenBao
Contributor Author

BowenBao commented Aug 3, 2023

@pranavsharma thanks for the info. Is there an ETA for this item?

@thiagocrepaldi
Contributor

FYI, ONNX 1.14.1 is coming by the end of August (onnx/onnx#5468).

@pranavsharma
Contributor

> @pranavsharma thanks for info, is there an eta for this item?

Check with @snnn

BowenBao added a commit that referenced this issue Aug 10, 2023
### Description
Bump the ONNX version to https://github.com/onnx/onnx/tree/rel-1.14.1 to include a fix for a segfault when shape-inferencing nested ONNX functions.
### Motivation and Context
Resolves #16170
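Since the fix ships with the bumped dependency, a quick way to check whether an installed onnx version string is at least 1.14.1. This is a pure-Python sketch (the helper name is mine; a real setup would compare with `packaging.version.Version` instead):

```python
def fix_is_included(version: str) -> bool:
    # Compare the dotted numeric components against (1, 14, 1), the release
    # said to carry the nested-function shape-inference fix. Non-numeric
    # (pre-release) components are dropped, so "1.14.1rc1" compares as 1.14.
    parts = [int(p) for p in version.split(".") if p.isdigit()]
    parts += [0] * (3 - len(parts))
    return tuple(parts[:3]) >= (1, 14, 1)

print(fix_is_included("1.14.0"))  # False
print(fix_is_included("1.14.1"))  # True
```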
jchen351 pushed a commit that referenced this issue Aug 12, 2023 (same description as the commit above).
kleiti pushed a commit to kleiti/onnxruntime that referenced this issue Mar 22, 2024 (same description as the commit above).