
diffusers not working at all #1408

Closed
manuelkamp opened this issue Dec 8, 2023 · 9 comments · Fixed by #1432
Labels: bug (Something isn't working), diffusers, high prio

manuelkamp commented Dec 8, 2023

LocalAI version:
2.0.0-ffmpeg

Environment, CPU architecture, OS, and Version:
Linux srv-gpt 5.15.131-1-pve #1 SMP PVE 5.15.131-2 (2023-11-14T11:32Z) x86_64 x86_64 x86_64 GNU/Linux, Proxmox LXC, AMD Ryzen 9 5900X, 128 GB RAM

Describe the bug
Diffusers is not working at all.

To Reproduce
I set up animagine-xl as stated in the docs: a file "animagine-xl.yaml" in the models folder with this content:

parameters:
  model: Linaqruf/animagine-xl
backend: diffusers

# Force CPU usage - set to true for GPU
f16: false
diffusers:
  pipeline_type: StableDiffusionXLPipeline
  cuda: false # Enable for GPU usage (CUDA)
  scheduler_type: euler_a
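
For reference, scheduler_type: euler_a appears to correspond to diffusers' EulerAncestralDiscreteScheduler, so the YAML above roughly maps to the following direct diffusers usage (a minimal sketch under that assumption, not LocalAI's actual backend code; assumes the diffusers package is installed):

# Rough out-of-LocalAI equivalent of the YAML above (illustrative sketch).
from diffusers import EulerAncestralDiscreteScheduler, StableDiffusionXLPipeline

# f16: false -> default fp32 weights; cuda: false -> stay on CPU (no .to("cuda"))
pipe = StableDiffusionXLPipeline.from_pretrained("Linaqruf/animagine-xl")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)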

Then I ran the command as stated in the docs:

curl http://localhost:8080/v1/images/generations \
    -H "Content-Type: application/json" \
    -d '{
      "prompt": "cat, outdoor, sun, tree|rain, night, people", 
      "model": "animagine-xl", 
      "step": 51,
      "size": "1024x1024" 
    }'

It always results in this error:

{"error":{"code":500,"message":"could not load model (no success): Unexpected err=ValueError(\"Pipeline \u003cclass 'diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl.StableDiffusionXLPipeline'\u003e expected {'vae', 'scheduler', 'text_encoder_2', 'tokenizer', 'text_encoder', 'unet', 'tokenizer_2'}, but only {'vae', 'scheduler', 'tokenizer', 'text_encoder', 'unet'} were passed.\"), type(err)=\u003cclass 'ValueError'\u003e","type":""}}

In addition, I tried another model (dreamlike-art/dreamlike-photoreal-2.0), which also fails with this error, so I assume it does not work with any model.

Also, there are no files in the /models folder, but disk usage has increased, so it definitely downloaded something somewhere (it was also loaded into RAM, as I saw in Proxmox). But where is it stored now? I want to delete it, because it takes up disk space (see the cache-inspection sketch below).

Expected behavior
Diffusers generates images.
Models are stored in the models folder too (or in any other folder I can access, so I can delete models I no longer want).
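
On the storage question: diffusers downloads Hub models into the Hugging Face cache (by default ~/.cache/huggingface/hub, overridable via the HF_HOME environment variable) rather than into the LocalAI models folder, which would explain disk usage growing with no new files in /models. A minimal sketch to inspect and reclaim that space, assuming the huggingface_hub package:

# List cached Hub repos and their size on disk, so unwanted ones can be deleted.
from huggingface_hub import scan_cache_dir

cache = scan_cache_dir()
for repo in cache.repos:
    print(repo.repo_id, repo.size_on_disk_str)
print("total bytes:", cache.size_on_disk)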

@manuelkamp manuelkamp added the bug (Something isn't working) label Dec 8, 2023
@lunamidori5 (Collaborator)

@manuelkamp remove the model from the request and try again

@lunamidori5 lunamidori5 assigned lunamidori5 and unassigned mudler Dec 9, 2023
@lunamidori5 lunamidori5 added the kind/question (Further information is requested) and needs more info labels and removed the bug (Something isn't working) label Dec 9, 2023
@lunamidori5 (Collaborator)

Here's a link to how to set up the SD yaml file: https://localai.io/howtos/easy-setup-sd/

@lunamidori5 lunamidori5 added the bug (Something isn't working) label Dec 9, 2023
@lunamidori5 (Collaborator)

@mudler I am also having this bug with a known working model, can you review?

manuelkamp (Author) commented Dec 9, 2023

I think you mixed something up in your responses here. I have SD working fine (I specifically said the issue is with diffusers!). I want to use a model other than SD, with diffusers. Removing the model name uses SD, which I do not want to use in this case...

@lunamidori5 (Collaborator)

> I think you mixed something up in your responses here. I have SD working fine (I specifically said the issue is with diffusers!). I want to use a model other than SD, with diffusers. Removing the model name uses SD, which I do not want to use in this case...

That's why I tagged the dev; I am also unable to use other models and get the same error. - @manuelkamp

mudler (Owner) commented Dec 9, 2023

probably a regression of #1144

@lunamidori5 (Collaborator)

@mudler adding my report here

7:12AM DBG Loading model in memory from file: /models/stabilityai/sdxl-turbo
7:12AM DBG Loading Model stabilityai/sdxl-turbo with gRPC (file: /models/stabilityai/sdxl-turbo) (backend: diffusers): {backendString:diffusers model:stabilityai/sdxl-turbo threads:10 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc0001ae960 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh petals:/build/backend/python/petals/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:true parallelRequests:false}
7:12AM DBG Loading external backend: /build/backend/python/diffusers/run.sh
7:12AM DBG Loading GRPC Process: /build/backend/python/diffusers/run.sh
7:12AM DBG GRPC Service for stabilityai/sdxl-turbo will be running at: '127.0.0.1:33673'
7:12AM DBG GRPC Service state dir: /tmp/go-processmanager996827074
7:12AM DBG GRPC Service Started
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:33673: connect: connection refused"
7:12AM DBG GRPC(stabilityai/sdxl-turbo-127.0.0.1:33673): stderr /build/backend/python/diffusers/run.sh: line 9: activate: No such file or directory
7:12AM DBG GRPC(stabilityai/sdxl-turbo-127.0.0.1:33673): stderr /build/backend/python/diffusers/run.sh: line 14: python: command not found
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:33673: connect: connection refused" (repeated 12×)
7:12AM DBG [WatchDog] Watchdog checks for idle connections
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:33673: connect: connection refused" (repeated 7×)
7:12AM DBG GRPC Service NOT ready
[127.0.0.1]:45074 500 - POST /v1/images/generations

Seems like the Python backends are not set up in the docker master image either: run.sh can find neither the conda activate script nor the python binary, so the gRPC diffusers service never comes up and every dial is refused...

@localai-bot (Contributor)

⚠️⚠️⚠️⚠️⚠️

Hi! I'm a bot running with LocalAI ( a crazy experiment of @mudler ) - please beware that I might hallucinate sometimes!

but... I can also be funny or helpful 😸, and I can generally provide good tips on where to look in the documentation or in the code, based on what you wrote in the issue.

Don't engage in conversation with me; I don't support replying (yet)!

⚠️⚠️⚠️⚠️⚠️

I'm sorry to hear that you're having trouble with the Diffusers backend. It seems like there might be a bug in the code for the backend or in the model configuration file.
To troubleshoot this issue, we can try a few things:

1. Check whether the diffusers backend is installed correctly by running the following command:

pip install diffusers==0.3.2

This installs a pinned version of the diffusers Python package. If it fails to install, please let me know the error message.
2. Ensure that the model configuration file is correct and matches your hardware. You can use the following command to list the available models:

curl http://localhost:8080/v1/models

This should return a JSON response with a list of all the available models. If you don't see the model you want to use, please let me know the name of the model and I can check if it is available.
3. Try using a different model that is known to work with the diffusers backend. For example, you can try the dreamlike-photoreal model by adding a YAML file with the following content:

name: dreamlike-photoreal
parameters:
  model: dreamlike-art/dreamlike-photoreal-2.0
backend: diffusers
# Force CPU usage - set to true for GPU
f16: false
diffusers:
  pipeline_type: StableDiffusionPipeline
  cuda: false # Enable for GPU usage (CUDA)
  scheduler_type: euler_a

This model is known to work with the diffusers backend and should generate images correctly. If it still fails, please let me know the error message.
4. Check the disk space usage on your machine to ensure that the downloaded models are not taking up too much space. You can use the following command to check the disk usage:

df -h

This should return a list of all the mounted filesystems and their available space. If you see any large directories or files, you can delete them to free up space.
I hope this helps! Let me know if you have any further questions or issues.


JZacharie (Contributor) commented Dec 27, 2023

LocalAI loads the stablediffusion backend by default instead of diffusers.


curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
  "prompt": "Two Boxes, 1blue, 1red",
  "size": "256x256",
  "model": "dreamlike-photoreal"
}'

With the environment variable COMPEL=0:

/tmp/localai/backend_data/backend-assets/grpc/stablediffusion

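The COMPEL=0 workaround points at the prompt-weighting path: LocalAI's diffusers backend evidently lets prompts be routed through the compel library before the pipeline call (hence the COMPEL env toggle). A rough sketch of what compel-style weighting looks like around a plain diffusers pipeline (illustrative only, assuming the compel and diffusers packages; not LocalAI's actual wiring):

# Encode the prompt with compel, then pass embeddings instead of raw text.
from compel import Compel
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("dreamlike-art/dreamlike-photoreal-2.0")
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)
prompt_embeds = compel("Two Boxes, 1blue, 1red")
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=25).images[0]
image.save("boxes.png")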
