
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling cublasCreate(handle) #2417

Closed
ghost opened this issue Mar 10, 2021 · 4 comments
Labels
bug Something isn't working

Comments

@ghost

ghost commented Mar 10, 2021

🐛 Bug

When I run your code I get the following error:

RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)`

My CUDA installation is fine, since I am able to run another open-source repository that uses CUDA (the yolact repository).

To Reproduce (REQUIRED)

Input:

python detect.py --source /home/muhammadmehdi/PycharmProjects/VIDEOS/INTERIOR_LENGTHY --weights yolov5s.pt --conf 0.25

Output:

Traceback (most recent call last):
  File "detect.py", line 175, in <module>
    detect()
  File "detect.py", line 33, in detect
    model = attempt_load(weights, map_location=device)  # load FP32 model
  File "/home/muhammadmehdi/PycharmProjects/SORT_TRIANGULATION/YOLOV5/models/experimental.py", line 120, in attempt_load
    model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval())  # FP32 model
  File "/home/muhammadmehdi/PycharmProjects/SORT_TRIANGULATION/YOLOV5/models/yolo.py", line 169, in fuse
    m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv
  File "/home/muhammadmehdi/PycharmProjects/SORT_TRIANGULATION/YOLOV5/utils/torch_utils.py", line 185, in fuse_conv_and_bn
    fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.size()))
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)`

Expected behavior

The code should perform inference on the images as outlined in the description here

Environment


  • OS: Ubuntu 20.04
  • GPU: GeForce GTX 1650, 3914.1875 MB

Additional context

I tried with both conda and pip, and both environments encounter the exact same error. My Python version is 3.8.5 and my torch version is 1.8.0.

I also verified that I have the GPU version of pytorch installed by using the following code:

import torch  # must be a CUDA-enabled build for these checks to succeed

print(torch.cuda.current_device())    # index of the currently selected CUDA device
print(torch.cuda.device(0))           # device object for GPU 0
print(torch.cuda.device_count())      # number of visible CUDA devices
print(torch.cuda.get_device_name(0))  # name of GPU 0
print(torch.cuda.is_available())      # True if PyTorch can use CUDA

And the output I got was:

0
<torch.cuda.device object at 0x7f3664031460>
1
GeForce GTX 1650
True
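
For completeness, a quick way to also check which CUDA toolkit the installed torch wheel was built against (a mismatch between this and the driver is a common cause of cuBLAS initialization errors; torch.version.cuda is a standard attribute):

$ python -c "import torch; print(torch.__version__, torch.version.cuda)"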
@ghost ghost added the bug Something isn't working label Mar 10, 2021
@github-actions
Contributor

github-actions bot commented Mar 10, 2021

👋 Hello @iAmJuan550, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

glenn-jocher commented Mar 10, 2021

@iAmJuan550 this is an environment issue that may be related to your CUDA/CUDNN or PyTorch installation, and is unrelated to YOLOv5. Please ensure you meet all dependency requirements if you are attempting to run YOLOv5 locally. If in doubt, create a new virtual Python 3.8 environment, clone the latest repo (code changes daily), and pip install -r requirements.txt again. We also highly recommend using one of our verified environments below.
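
A minimal sketch of that clean-environment workflow, assuming a plain clone of the ultralytics/yolov5 repository (the environment name yolov5-env is just an example):

$ python3.8 -m venv yolov5-env          # fresh Python 3.8 virtual environment
$ source yolov5-env/bin/activate
$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt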

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are passing. These tests evaluate proper operation of basic YOLOv5 functionality, including training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu.

@ghost
Author

ghost commented Mar 10, 2021

I fixed this problem by downgrading from torch 1.8.0 to 1.7.0
I also downgraded the torchvision version from 0.9.0 to 0.8.1 (this solves the NMS not available with CUDA backend issue)
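
For reference, one way to apply that downgrade with pip (a sketch assuming the default PyPI wheels; pick the build matching your CUDA toolkit if installing from the PyTorch index instead):

$ pip uninstall torch torchvision
$ pip install torch==1.7.0 torchvision==0.8.1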

@ghost ghost closed this as completed Mar 10, 2021
@vinhtran1

I fixed this problem by downgrading from torch 1.8.0 to 1.7.0
I also downgraded the torchvision version from 0.9.0 to 0.8.1 (this solves the NMS not available with CUDA backend issue)

It worked for me, thank you.

This issue was closed.