
[Feature] Suppress libcudart.so.XX: cannot open shared object file: No such file or directory #2903

Open
richard-hajek opened this issue Aug 21, 2024 · 3 comments
Labels
enhancement (New feature or request) · need-info (Further information from issue author is requested) · python-bindings (gpt4all-bindings Python specific issues)

Comments

@richard-hajek

Suppress misleading errors

Hey,

I am currently running CUDA 12, and gpt4all CAN find it. However, because _pyllmodel.py#L64 tries to load many variations of libcudart.so, it always prints at least some errors.

Is there any way to suppress these when a valid library is found?
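For illustration only (this is not the actual _pyllmodel.py code), a minimal sketch of a loader that tries several candidate SONAMEs and only reports errors if none of them can be opened; the candidate list is hypothetical:

```python
import ctypes

# Hypothetical candidate list; the real bindings use their own set of names.
CUDART_CANDIDATES = ["libcudart.so.12", "libcudart.so.11.0", "libcudart.so"]

def load_cudart():
    """Try each candidate SONAME and stay quiet unless every attempt fails."""
    errors = []
    for name in CUDART_CANDIDATES:
        try:
            return ctypes.CDLL(name)
        except OSError as exc:  # "cannot open shared object file" lands here
            errors.append(f"{name}: {exc}")
    # Surface the per-candidate errors only when nothing could be loaded.
    print("warning: no CUDA runtime found:\n  " + "\n  ".join(errors))
    return None
```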

@richard-hajek richard-hajek added the enhancement (New feature or request) label Aug 21, 2024
@cebtenzzre cebtenzzre added the python-bindings (gpt4all-bindings Python specific issues) and need-info (Further information from issue author is requested) labels Aug 22, 2024
@cebtenzzre
Member

Could you provide some specific steps to reproduce this behavior? GPT4All already suppresses all DLL loading errors in that area of the code.

The common situation in which you will see a non-fatal CUDA-related error is when you are not using the CUDA backend (device='cuda:...') but GPT4All tries to load the CUDA implementation anyway, which prints warnings to the console about libcudart.so.11.0 or similar. These messages will appear if you have only CUDA 12 but are using the latest gpt4all package, which requires CUDA 11, or if you do not have the NVIDIA driver (e.g. because you don't actually have an NVIDIA GPU).
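For context, a minimal sketch of that situation, assuming the standard gpt4all Python bindings and an example model name: CUDA is never selected, yet the backend probing at load time can still print libcudart warnings.

```python
from gpt4all import GPT4All

# Example model name; device="cpu" explicitly avoids the CUDA backend,
# but backend probing at load time may still print libcudart warnings.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf", device="cpu")
print(model.generate("Hello", max_tokens=16))
```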

@richard-hajek
Author

Hey,

I built a reproducible example

https://github.com/richard-hajek/repro-gpt4all/tree/main

Run `docker build . --progress plain` in that repo; it will reproduce the issue.

[screenshot of the reproduced error output]

@richard-hajek
Author

richard-hajek commented Aug 25, 2024

> These messages will appear if you have only CUDA 12 but are using the latest gpt4all package which requires CUDA 11, or if you do not have the NVIDIA driver (e.g. because you don't actually have an NVIDIA GPU).

I have CUDA 12 and no CUDA 11. But I'm not even using the GPU, and furthermore I've stepped through it in the debugger: it does manage to find the CUDA 12 .so file.
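For reference, a quick standalone check that mirrors what was verified in the debugger (a sketch using only the standard library):

```python
import ctypes

# Can the CUDA 12 runtime be dlopen'ed at all on this system?
try:
    ctypes.CDLL("libcudart.so.12")
    print("libcudart.so.12 loaded fine")
except OSError as exc:
    print(f"failed to load libcudart.so.12: {exc}")
```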
