This repository has been archived by the owner on Dec 14, 2023. It is now read-only.

Lora inference problem #78

Open
zacaikido opened this issue Jun 24, 2023 · 7 comments
Labels
bug Something isn't working

Comments

@zacaikido

When trying to run inference using the --lora_path parameter, I get:

LoRA rank 64 is too large. setting to: 4
list index out of range
Couldn't inject LoRA's due to an error.

0%|          | 0/50 [00:00<?, ?it/s]
0%|          | 0/50 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/content/drive/MyDrive/Text-To-Video-Finetuning/inference.py", line 194, in <module>
videos = inference(**args)
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/drive/MyDrive/Text-To-Video-Finetuning/inference.py", line 141, in inference
videos = pipeline(
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py", line 646, in __call__
noise_pred = self.unet(
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/Text-To-Video-Finetuning/models/unet_3d_condition.py", line 399, in forward
emb = self.time_embedding(t_emb, timestep_cond)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/diffusers/models/embeddings.py", line 192, in forward
sample = self.linear_1(sample)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/Text-To-Video-Finetuning/utils/lora.py", line 60, in forward
+ self.dropout(self.lora_up(self.selector(self.lora_down(input))))
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (6x320 and 1280x16)

I'm running it on a Colab
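
For context, the failing line in utils/lora.py adds the LoRA projection lora_up(lora_down(input)) to the base layer's output. A minimal sketch of that pattern in plain PyTorch (dimensions taken from the error message, not the repo's actual classes) reproduces the mismatch when a LoRA pair built for a 1280-feature layer ends up injected onto a 320-feature layer:

```python
import torch
import torch.nn as nn

# Base layer with the dimensions implied by the traceback: the UNet time
# embedding's linear_1 maps 320 -> 1280 features.
base = nn.Linear(320, 1280)

# LoRA pair whose down-projection was built for a 1280-feature input
# (rank 16, matching the "1280x16" matrix in the error message).
lora_down = nn.Linear(1280, 16, bias=False)
lora_up = nn.Linear(16, 1280, bias=False)

x = torch.randn(6, 320)            # 6 samples, 320 features, matching "6x320"
out = base(x)                      # ok: (6, 320) -> (6, 1280)
out = out + lora_up(lora_down(x))  # RuntimeError: mat1 and mat2 shapes cannot be multiplied (6x320 and 1280x16)
```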

@ExponentialML added the bug label on Jun 25, 2023
@ExponentialML
Owner

Hello. Is this a LoRA trained using this finetuning repository or are you using another one?

@ChawDoe

ChawDoe commented Jul 1, 2023

I had the same problem using the latest version of this repo.

@JCBrouwer
Contributor

If either of you @ChawDoe @zacaikido could provide a LoRA checkpoint, the settings it was trained with, and the settings you're running inference with, I can take a look.

@zacaikido
Author

Pointing --lora_path to a folder that contains only the xxx_text_encoder.pt file works around the problem.
It seems to be an issue with the xxx_unet.pt file, or with having multiple LoRA files in the same folder?

@howardgriffin

Same problem:
'mat1 and mat2 shapes cannot be multiplied (24576x256 and 512x512)'

@fairleehu

Me too: RuntimeError: mat1 and mat2 shapes cannot be multiplied (12288x256 and 512x512)

@zideliu
Contributor

zideliu commented Sep 6, 2023

Quoting the original report:

LoRA rank 64 is too large. setting to: 4
list index out of range
Couldn't inject LoRA's due to an error.
[...same traceback as above, ending in...]
RuntimeError: mat1 and mat2 shapes cannot be multiplied (6x320 and 1280x16)

The relevant code is in /utils/lora.py, at the two locations shown in the attached screenshots.
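
If the injection code clamps the rank without checking that the saved LoRA weights actually fit the target module, a defensive check along these lines would turn the late RuntimeError into an early, descriptive one. This is a hypothetical helper sketched against plain nn.Linear, not code from this repo:

```python
import torch.nn as nn

def check_lora_fits(target: nn.Linear, lora_down_w, lora_up_w) -> None:
    """Raise a descriptive error if a (lora_down, lora_up) weight pair does not
    fit `target`. Assumes the usual nn.Linear weight layout:
    lora_down_w is (rank, in_features), lora_up_w is (out_features, rank).
    """
    rank_down, in_features = lora_down_w.shape
    out_features, rank_up = lora_up_w.shape
    if rank_down != rank_up:
        raise ValueError(f"LoRA rank mismatch: down={rank_down}, up={rank_up}")
    if in_features != target.in_features:
        raise ValueError(
            f"lora_down expects {in_features} input features but the target layer "
            f"takes {target.in_features}; the checkpoint was likely saved for a "
            f"different module (or a different base model)."
        )
    if out_features != target.out_features:
        raise ValueError(
            f"lora_up produces {out_features} features but the target layer "
            f"outputs {target.out_features}."
        )
```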
