AWQ alongside sparsegpt #36

Open
Returnvoidspec opened this issue May 13, 2024 · 0 comments

Hi, I'm wondering whether you think it would be possible to use AWQ together with SparseGPT. I tried to make an AWQ model work with SparseGPT by finding the model's awq.modules.linear.gemv.WQLinear_GEMV layers (sketch below), but I still get blocked in the add_batch step.
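For reference, the gist of my change was to also collect the AWQ layers when opt_sequential builds its per-block subset (a minimal sketch of my attempt, assuming the find_layers helper from modelutils.py accepts a layers list like this):

```python
import torch.nn as nn
from awq.modules.linear.gemv import WQLinear_GEMV  # the AWQ quantized linear used by my model

from modelutils import find_layers
from sparsegpt import SparseGPT

# inside opt_sequential, for each decoder layer:
# pick up the AWQ-quantized projections in addition to any plain nn.Linear modules
subset = find_layers(layer, layers=[nn.Linear, WQLinear_GEMV])

gpts = {name: SparseGPT(subset[name]) for name in subset}  # as in the original opt.py
```

With that change the forward hooks fire, but calibration then fails in add_batch with this error: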

Traceback (most recent call last):
File "C:\Users\mjarnier\travail2\sparse\sparsegpt-master\opt.py", line 320, in
opt_sequential(model, dataloader, DEV)
File "C:\Users\mjarnier\AppData\Local\anaconda3\envs\nouvel_env\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mjarnier\travail2\sparse\sparsegpt-master\opt.py", line 101, in opt_sequential
outs[j] = layer(inps[j].unsqueeze(0), attention_mask=attention_mask)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mjarnier\AppData\Local\anaconda3\envs\nouvel_env\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mjarnier\AppData\Local\anaconda3\envs\nouvel_env\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mjarnier\AppData\Local\anaconda3\envs\nouvel_env\Lib\site-packages\transformers\models\opt\modeling_opt.py", line 552, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
^^^^^^^^^^^^^^^
File "C:\Users\mjarnier\AppData\Local\anaconda3\envs\nouvel_env\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mjarnier\AppData\Local\anaconda3\envs\nouvel_env\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mjarnier\AppData\Local\anaconda3\envs\nouvel_env\Lib\site-packages\transformers\models\opt\modeling_opt.py", line 182, in forward
query_states = self.q_proj(hidden_states) * self.scaling
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mjarnier\AppData\Local\anaconda3\envs\nouvel_env\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mjarnier\AppData\Local\anaconda3\envs\nouvel_env\Lib\site-packages\torch\nn\modules\module.py", line 1574, in _call_impl
hook_result = hook(self, args, result)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mjarnier\travail2\sparse\sparsegpt-master\opt.py", line 95, in tmp
gpts[name].add_batch(inp[0].data, out.data)
File "C:\Users\mjarnier\travail2\sparse\sparsegpt-master\sparsegpt.py", line 52, in add_batch
self.H += inp.matmul(inp.t())
RuntimeError: The size of tensor a (96) must match the size of tensor b (768) at non-singleton dimension 1

I just want to know whether this is possible, or if anyone has an idea?
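For what it's worth, 96 is exactly 768 / 8, so my guess is that the Hessian gets sized from the packed qweight buffer (with the usual 4-bit packing, AWQ stores eight values per int32) instead of from the layer's real in_features, while the hooked activations are still 768-wide. A quick check along these lines (attribute names assumed from my AWQ version):

```python
# q_proj: the AWQ-quantized query projection from one decoder layer (hypothetical handle)
print(q_proj.in_features, q_proj.out_features)  # expecting 768, 768 (the hidden size in the error)
print(q_proj.qweight.shape)                     # one dimension is packed: 768 // 8 == 96
# SparseGPT builds H as (columns, columns) from the weight shape it sees, so H ends up 96x96,
# while add_batch receives 768-dim inputs -> the size mismatch in the traceback above.
```

If that's right, I suppose the Hessian (and the pruning itself) would have to run on dequantized 768x768 weights rather than on the packed buffers. Does that sound reasonable?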
