RuntimeError: The size of tensor a (2) must match the size of tensor b (77) at non-singleton dimension 2 #3

CrazyBoyM opened this issue Sep 22, 2022 · 0 comments

When I try to convert the waifu-diffusion model to ONNX, the export fails with the following error:

(ldm) PS J:\myProject\AI\画图\gaintmodels> python .\export_df_onnx.py
C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\torch\onnx\utils.py:359: UserWarning: Model has no forward function
  warnings.warn("Model has no forward function")
C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\torch\onnx\utils.py:1329: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input init_image
  warnings.warn("No names were found for specified dynamic axes of provided input."
C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\torch\nn\functional.py:2498: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').     
  _verify_batch_size([input.size(0) * input.size(1) // num_groups, num_groups] + list(input.size()[2:]))
C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\diffusers\models\resnet.py:39: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert x.shape[1] == self.channels
C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\torch\onnx\utils.py:1329: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input latents
  warnings.warn("No names were found for specified dynamic axes of provided input."
09:28:32 09.22 INFO export_df_onnx.py:106]: vae decoder saved.
J:\myProject\AI\画图\gaintmodels\stablefusion\clip_textmodel.py:35: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  mask.fill_(torch.tensor(torch.finfo(dtype).min))
C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\transformers\models\clip\modeling_clip.py:222: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\transformers\models\clip\modeling_clip.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len):
C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\transformers\models\clip\modeling_clip.py:262: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
Traceback (most recent call last):
  File ".\export_df_onnx.py", line 130, in <module>
    return forward_call(*input, **kwargs)
  File "C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1098, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\transformers\models\clip\modeling_clip.py", line 728, in forward
    return self.text_model(
  File "C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1098, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "C:\Users\tsing\miniconda3\envs\ldm\lib\site-packages\transformers\models\clip\modeling_clip.py", line 641, in forward
    causal_attention_mask = self._build_causal_attention_mask(bsz, seq_len, hidden_states.dtype).to(
  File "J:\myProject\AI\画图\gaintmodels\stablefusion\clip_textmodel.py", line 37, in _build_causal_attention_mask
    triu_onnx(mask, 1)
  File "J:\myProject\AI\画图\gaintmodels\stablefusion\clip_textmodel.py", line 16, in triu_onnx
    return mask * x
RuntimeError: The size of tensor a (2) must match the size of tensor b (77) at non-singleton dimension 2
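For context, the failure happens inside the custom `triu_onnx` helper (`clip_textmodel.py`, line 16), which multiplies the causal attention mask by an upper-triangular mask. The shapes disagree: the causal mask is batched, e.g. `(bsz, seq_len, seq_len)` = `(2, 77, 77)`, while the helper's mask apparently isn't, so the elementwise `mask * x` cannot broadcast. Below is a minimal sketch of an ONNX-friendly `triu` replacement that builds the keep-mask from the input's last two dimensions with `torch.arange` comparisons (avoiding `aten::triu`, which older ONNX opsets don't export) and broadcasts cleanly over any leading batch dimensions. This is an illustrative reimplementation, not the repository's actual code:

```python
import torch


def triu_onnx(x: torch.Tensor, diagonal: int = 0) -> torch.Tensor:
    """Upper-triangular of `x` along its last two dims, built from
    arange comparisons so tracing produces ONNX-exportable ops.

    Works for 2-D inputs and for batched inputs such as
    (bsz, seq_len, seq_len), because the (rows, cols) keep-mask
    broadcasts over all leading dimensions.
    """
    rows = torch.arange(x.shape[-2], device=x.device).unsqueeze(1)  # (L, 1)
    cols = torch.arange(x.shape[-1], device=x.device).unsqueeze(0)  # (1, C)
    keep = (cols - rows) >= diagonal                                # (L, C) bool
    return x * keep.to(x.dtype)


if __name__ == "__main__":
    # Sanity check against torch.triu on a batched mask like CLIP's
    # (bsz=2, seq_len=77) causal attention mask.
    m = torch.randn(2, 77, 77)
    assert torch.equal(triu_onnx(m, 1), torch.triu(m, 1))
```

If the helper in `clip_textmodel.py` instead precomputes a fixed-size triangular mask, the same error can appear whenever `bsz` or `seq_len` differs from the shape it was built for; deriving the mask from `x.shape[-2:]` as above sidesteps that.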