
Old GPU (Quadro K6000) is not used #1899

Closed
LLMuser opened this issue Jul 19, 2024 · 2 comments
Labels
question Further information is requested

Comments

LLMuser commented Jul 19, 2024

How are you running AnythingLLM?

Docker (local)

What happened?

Running the latest AnythingLLM on Docker on Ubuntu 20.04 Server.
Needed to use the lancedb_revert image, though, to prevent crashes.
NVIDIA Driver Version: 470.256.02
CUDA Version: 11.4

nvidia-smi reports no errors, but no processes are listed, i.e. AnythingLLM is not using the GPU :-(

I know it's an outdated GPU, but maybe there is a workaround?

Are there known steps to reproduce?

docker pull mintplexlabs/anythingllm:lancedb_revert

export STORAGE_LOCATION=/mnt && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env" && \
docker run -d -p 3001:3001 \
  --cap-add SYS_ADMIN \
  --gpus all \
  --name anythingllm \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm:lancedb_revert
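A quick sanity check (not from the original report) that narrows down whether the problem is the container runtime or the application: run a plain CUDA base image with `--gpus all` and see whether nvidia-smi sees the card from inside the container. The `nvidia/cuda:11.4.3-base-ubuntu20.04` tag below is an illustrative choice matching the reported CUDA 11.4 driver stack, and this assumes the NVIDIA Container Toolkit is installed on the host.

```shell
# Verify the host driver sees the GPU at all
nvidia-smi

# Verify Docker can pass the GPU through to a container.
# If this fails, the issue is the NVIDIA Container Toolkit /
# docker setup, not AnythingLLM.
docker run --rm --gpus all nvidia/cuda:11.4.3-base-ubuntu20.04 nvidia-smi
```

If the second command lists the Quadro K6000 but AnythingLLM still shows no GPU processes, the container runtime is fine and the question becomes which LLM provider inside AnythingLLM is (or is not) configured to use CUDA.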

@LLMuser LLMuser added the possible bug Bug was reported but is not confirmed or is unable to be replicated. label Jul 19, 2024
@timothycarambat
Member

This would be entirely dependent on the LLM provider you are using - are you loading a model manually into /models or using something external like Ollama/LMstudio/LocalAI?

There is really nothing in AnythingLLM that would use the GPU unless you are using the "native" llama-cpp LLM, which is going to be deprecated soon because of exactly these kinds of issues with CUDA detection and the need to rebuild the binary for it.

@timothycarambat timothycarambat added question Further information is requested and removed possible bug Bug was reported but is not confirmed or is unable to be replicated. labels Jul 19, 2024

LLMuser commented Jul 22, 2024

Thank you @timothycarambat. I tried both native and external LLMs. But good to know that if I have an external LLM running, AnythingLLM does not need GPU support...

@timothycarambat timothycarambat closed this as not planned Jul 22, 2024