
Got error when running infer_onnx_tensorrt example #1333

Closed
andre-zh opened this issue Feb 15, 2023 · 1 comment

Comments


Environment

FastDeploy version:
fastdeploy-gpu-python 1.0.3

OS Platform:
Linux cm.bigdata 3.10.0-1160.76.1.el7.x86_64 #1 SMP Wed Aug 10 16:21:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.9.2009 (Core)

Hardware:
Nvidia GPU RTX A4000 CUDA 11.2 CUDNN 8.2

Program Language:
Python 3.10

Problem description

When running the infer_onnx_tensorrt example
(FastDeploy/examples/runtime/python/infer_onnx_tensorrt.py),
the following error message appears:
[ERROR] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(256)::InitFromOnnx [ERROR] Error occurs while calling cudaStreamCreate().
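Since the failure comes from `cudaStreamCreate()` rather than from FastDeploy's own logic, it can help to call the CUDA runtime directly to check whether CUDA itself is healthy. The sketch below is a hypothetical diagnostic (not part of the FastDeploy example) that loads `libcudart` via `ctypes` and attempts the same stream creation:

```python
# Hypothetical diagnostic: call cudaStreamCreate directly via ctypes to see
# whether the CUDA runtime/driver is usable, independent of FastDeploy.
import ctypes
import ctypes.util


def check_cuda_stream():
    """Return a human-readable status for a direct cudaStreamCreate call."""
    libname = ctypes.util.find_library("cudart")
    if libname is None:
        return "libcudart not found (CUDA runtime missing or not on the loader path)"
    cudart = ctypes.CDLL(libname)
    stream = ctypes.c_void_p()
    err = cudart.cudaStreamCreate(ctypes.byref(stream))
    if err != 0:  # cudaSuccess == 0
        # cudaGetErrorString returns a const char* describing the error code
        cudart.cudaGetErrorString.restype = ctypes.c_char_p
        return "cudaStreamCreate failed: " + cudart.cudaGetErrorString(err).decode()
    cudart.cudaStreamDestroy(stream)
    return "cudaStreamCreate succeeded"


if __name__ == "__main__":
    print(check_cuda_stream())
```

If this standalone call also fails, the problem is in the CUDA driver/runtime installation rather than in FastDeploy or TensorRT, which matches the resolution below.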

andre-zh commented Feb 15, 2023

Solved by reinstalling the NVIDIA driver.
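After a driver reinstall, a quick sanity check is to confirm the driver is visible again. This hypothetical helper (not from the issue) shells out to `nvidia-smi` and reports the installed driver version, handling the case where the tool is absent:

```python
# Hypothetical helper: report the installed NVIDIA driver version via
# nvidia-smi, useful for confirming the driver is visible after a reinstall.
import shutil
import subprocess


def driver_version():
    """Return the NVIDIA driver version string, or a note if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found (driver not installed or not on PATH)"
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True,
        text=True,
    )
    if out.returncode != 0:
        return "nvidia-smi failed: " + out.stderr.strip()
    return out.stdout.strip()


if __name__ == "__main__":
    print(driver_version())
```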
