
RuntimeError: CUDA error: no kernel image is available for execution on the device #50

Open
baiziyuandyufei opened this issue May 31, 2022 · 2 comments

Comments

@baiziyuandyufei

When I run `p = Pipeline('auto')`, I get:

>>> from trankit import Pipeline
2022-05-31 18:01:41.938559: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
>>> p = Pipeline('auto')
Loading pretrained XLM-Roberta, this may take a while...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/trankit/pipeline.py", line 85, in __init__
    self._embedding_layers.half()
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 765, in half
    return self._apply(lambda t: t.half() if t.is_floating_point() else t)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 578, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 578, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 578, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 601, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 765, in <lambda>
    return self._apply(lambda t: t.half() if t.is_floating_point() else t)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

The Docker image is nvidia/cuda:11.4.2-cudnn8-runtime-ubuntu18.04.

$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

$ nvidia-smi
NVIDIA-SMI 470.103.01 Driver Version: 470.103.01 CUDA Version: 11.4

$ pip list | grep torch
torch 1.11.0
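This error usually means the installed torch wheel was not compiled for the GPU's compute capability. A minimal diagnostic sketch to check this (`torch.cuda.get_device_capability` and `torch.cuda.get_arch_list` are real torch calls; the `wheel_supports` helper is my own, not a torch API):

```python
def wheel_supports(capability, arch_list):
    """True if a (major, minor) compute capability appears among the
    'sm_XY' architectures the installed torch wheel was built for."""
    return f"sm_{capability[0]}{capability[1]}" in arch_list

try:
    import torch
    if torch.cuda.is_available():
        cap = torch.cuda.get_device_capability(0)   # e.g. (8, 6)
        archs = torch.cuda.get_arch_list()          # e.g. ['sm_37', ..., 'sm_86']
        print("GPU capability:", cap)
        print("Wheel archs:", archs)
        print("Supported:", wheel_supports(cap, archs))
except ImportError:
    print("torch is not installed")
```

If the capability is missing from the arch list, the wheel simply has no kernels for that GPU, which matches the error message above.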

@kirianguiller

Same problem here, with CUDA 11.6 and torch 1.12 (and trankit 1.1.1).

@ulf1

ulf1 commented Aug 11, 2022

Same error here:

  • trankit.Pipeline(lang="german-hdt", gpu=True, cache_dir="./cache") => RuntimeError: CUDA error: no kernel image is available for execution on the device
  • Debian 11
  • Python 3.9
  • torch.cuda.is_available() returns True
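A common fix for this class of error is reinstalling a torch wheel built against a CUDA version the driver supports. This is a sketch, not a verified fix for this issue: the driver above reports CUDA 11.4, so the cu113 wheels should be compatible, but the exact version tag and index URL should be checked against the PyTorch installation matrix.

```shell
# Replace the default wheel with one built for CUDA 11.3
# (closest build at or below the driver's reported CUDA 11.4).
pip uninstall -y torch
pip install torch==1.11.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
```

Alternatively, since `torch.cuda.is_available()` returning True does not guarantee compiled kernels for the GPU, passing `gpu=False` to `trankit.Pipeline` (the same parameter shown in the call above) should at least work around the crash on CPU.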
