export TensorRT model error #11453
Comments
@liquored hello YangBo Zhou, it seems like TensorRT is having trouble initializing CUDA on your machine. This could be due to a variety of reasons, such as an incompatible CUDA version or insufficient available memory. Can you please try the following steps:
If these steps do not resolve the issue, please let me know and I'll be happy to help you further. Best regards,
@liquored I encountered the same problem, can you tell me the solution? Looking forward to your reply!
@xiezhangxiang hi there, I understand that you're facing the same issue with exporting your model using TensorRT. To resolve this problem, you can try the following steps:
Please give these steps a try and let me know if you encounter any further difficulties. Thank you,
Hi, I followed the instructions (5 steps) above. I downgraded CUDA 11.4 to 11.3, but I still can't get past the following error:

```
Namespace(calib_batch_size=8, calib_cache='./calibration.cache', calib_input=None, calib_num_images=5000, conf_thres=0.4, end2end=False, engine='models/object-detector/y7_b1.trt', iou_thres=0.5, max_det=100, onnx='models/object-detector/y7_b1.onnx', precision='fp16', v8=False, verbose=False, workspace=1)
[09/11/2023-16:37:21] [TRT] [W] Unable to determine GPU memory usage
```

Kindly advise.
@Capitolhill hi, I'm sorry to hear that you're still encountering the error even after following the steps mentioned earlier. Here are a few additional troubleshooting steps you can try:
If the issue persists, please provide more details about your system configuration (GPU model, OS version, etc.) so we can further assist you. Regards,
@Capitolhill hi,
@liquored hi, I apologize for the delayed response. It appears that the issue you're facing is related to a problem with your CUDA installation. Based on the experiences of others, reinstalling the system and CUDA has resolved similar issues. It's worth checking if your CUDA installation is functioning correctly. I hope this information is helpful to you. If you have any further questions or concerns, please don't hesitate to ask. Regards,
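One concrete way to reason about "is my CUDA installation functioning correctly" here: error 35 in the original log is `cudaErrorInsufficientDriver`, meaning the installed NVIDIA driver is older than the CUDA runtime that TensorRT was built against. As a rough sketch of the compatibility rule (the helper names below are hypothetical, not part of export.py or TensorRT):

```python
# Hypothetical helper: decide whether CUDA error 35 (cudaErrorInsufficientDriver)
# is expected, given the CUDA version the driver supports (the version shown by
# `nvidia-smi`) and the CUDA runtime version the toolkit/TensorRT was built for.

def parse_cuda_version(version: str) -> tuple:
    """Parse a version string like '11.4' into a comparable (major, minor) tuple."""
    major, minor = version.split(".")[:2]
    return (int(major), int(minor))

def driver_supports_runtime(driver_cuda: str, runtime_cuda: str) -> bool:
    """NVIDIA drivers are backward compatible: the driver must support a CUDA
    version >= the runtime's version, otherwise CUDA initialization fails
    with error 35 (insufficient driver)."""
    return parse_cuda_version(driver_cuda) >= parse_cuda_version(runtime_cuda)

# A driver that only supports CUDA 11.2 cannot serve an 11.4 runtime:
print(driver_supports_runtime("11.2", "11.4"))  # False -> expect error 35
print(driver_supports_runtime("12.0", "11.3"))  # True  -> driver is new enough
```

In other words, downgrading the CUDA toolkit does not help if the *driver* itself is too old; updating the driver (or matching the runtime to what the driver supports) is what clears error 35.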
My CUDA driver incompatibility was indeed the problem. Thanks @glenn-jocher and @liquored for your kind help! |
@Capitolhill glad to hear that the issue has been resolved! You're welcome, and I'm glad I could assist you. If you have any more questions or need further assistance, feel free to ask. The YOLOv5 community and the Ultralytics team are always here to help. Have a great day! |
Search before asking
Question
I want to export my model with TensorRT, but when I run export.py I get the following error:
```
TensorRT: starting export with TensorRT 8.6.0...
[04/28/2023-16:38:59] [TRT] [W] Unable to determine GPU memory usage
[04/28/2023-16:38:59] [TRT] [W] Unable to determine GPU memory usage
[04/28/2023-16:38:59] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 3351, GPU 0 (MiB)
[04/28/2023-16:38:59] [TRT] [W] CUDA initialization failure with error: 35. Please check your CUDA installation: http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
TensorRT: export failure ❌ 7.7s: pybind11::init(): factory function returned nullptr
```
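For context, the numeric code in `CUDA initialization failure with error: 35` comes from the CUDA runtime's `cudaError_t` enum. A minimal lookup for a few common codes (this table is illustrative, not exhaustive):

```python
# Minimal lookup for CUDA runtime error codes seen in logs like the one above.
# Values are from the CUDA runtime API's cudaError_t enum; only a few common
# initialization-related codes are shown.
CUDA_ERRORS = {
    35: "cudaErrorInsufficientDriver",  # NVIDIA driver older than the CUDA runtime
    100: "cudaErrorNoDevice",           # no CUDA-capable GPU detected
    999: "cudaErrorUnknown",            # unspecified failure
}

def explain_cuda_error(code: int) -> str:
    """Map a CUDA runtime error code to its enum name, if known."""
    return CUDA_ERRORS.get(code, f"unrecognized CUDA error {code}")

print(explain_cuda_error(35))  # cudaErrorInsufficientDriver
```

So the log above points at a driver/runtime mismatch rather than a problem with export.py itself, which is consistent with the resolution reported later in this thread.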
Additional
I am sure the NVIDIA driver is installed, because training, testing, and detection all run successfully.
If you could help me deal with this, I would appreciate it a lot!
Looking forward to your reply!
Thank you!
Yours, YangBo Zhou