ImportError: Found an incompatible version of auto-gptq. Found version 0.4.2, but only versions above {AUTOGPTQ_MINIMUM_VERSION} are supported #835
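For context: the literal {AUTOGPTQ_MINIMUM_VERSION} placeholder in the message suggests the upstream check builds its error string without formatting it, so the threshold never gets filled in. Before trying the workarounds below, it's worth confirming which versions are actually installed; a minimal check, assuming the PyPI package names used elsewhere in this thread:

```bash
# Print installed versions of the packages implicated in this thread
# (package names as published on PyPI; adjust if your environment differs).
pip show auto-gptq peft optimum | grep -E "^(Name|Version):"
python -c "import torch; print(torch.__version__)"
```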
Comments
I'm experiencing the same issue.
Same issue here. I'm using the Docker image winglian/axolotl:main-py3.10-cu118-2.0.1 (last pushed Nov 8, 2023 at 2:55 am; digest sha256:0da75e481402756cca380756b4493150229320776f20c2e67c751fca69690ada).
There was another issue with a similar problem: #817. But neither downgrading peft nor upgrading auto-gptq works; each just produces different errors.
I got around this by reinstalling PyTorch and installing peft==0.6.0.
@ehartford please provide the command and version you used to reinstall PyTorch.
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
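Putting the two suggested steps together, a sketch of the workaround (versions are the ones quoted in this thread; pytorch-cuda=11.8 assumes a CUDA 11.8 environment):

```bash
# Reinstall PyTorch with CUDA 11.8 builds, then pin peft to 0.6.0
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
pip install peft==0.6.0
```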
@ehartford Didn't help. Getting this now:
That's a completely different error, not caused by this issue.
@ehartford Thanks, any idea how to solve it? Or should I create a separate issue for it?
I've never seen that error. I'd recommend you follow the stack trace and see what code is causing it.
I ran into this error yesterday when I tried manually installing axolotl on a RunPod instance with the default pytorch 2.0.1 Docker image, and managed to resolve it by using winglian's axolotl Docker image (https://runpod.io/gsc?template=v2ickqhz9s&ref=6i7fkpdz). However, I just tried booting up a RunPod instance with the axolotl Docker image this morning, and unfortunately I'm getting this error again. EDIT: To clarify, by "this error" I mean the original AutoGPTQ error at the top of this issue, not the subsequent one mentioned further down.
Try installing peft==0.6.0.
I faced the same issue and reported it in Discord. Caseus advised pip uninstall auto-gptq within the Docker image. This did resolve the issue for me (until the underlying dependency issue is settled in a new image). If you need auto-gptq, to quote him: "try pip uninstalling auto-gptq
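As a concrete command, the advice above amounts to the following, run inside the container (the PyPI package name is auto-gptq; -y just skips the confirmation prompt):

```bash
# Remove auto-gptq so the version gate never triggers
pip uninstall -y auto-gptq
```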
The other issue is that I'm not even trying to use GPTQ-based training, so I'm not sure why this AutoGPTQ check should error out the run at all. I ended up getting the training run to start by using the recommended fixes of reinstalling torch and installing peft==0.6.0.
Same here: fixed with #838. This is one of the problems with having unpinned dependency versions in general. EDIT: Looks like
I used it yesterday with peft==0.6.0 and auto-gptq==0.4.2 when I hit this, and had to drop the optimum version.
The new peft just dropped this morning, and I had to pin optimum yesterday, so I'm thinking optimum needs to be pinned. Right now I'm using
I should clarify that. As far as those two dependency versions working together, that appears to be fine. As far as getting this thing to work with GPTQ on multiple GPUs these days, it's a friggen mess of working out various version issues beyond this one.
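For reference, pinning optimum works the same way as the peft pin above; the version below is purely illustrative, since the commenter's actual pin was cut off in this thread:

```bash
# Hypothetical pin: replace 1.13.2 with whichever release works for your stack
pip install optimum==1.13.2
```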
This should help too, if it gets merged upstream: huggingface/peft#1109
#838 has been merged and should resolve this for now. Hopefully we can figure out what's wrong with
Please check that this issue hasn't been reported before.
Expected Behavior
Should work
Current behaviour
Gives the error described in the title.
Steps to reproduce
Config yaml
No response
Possible solution
No response
Which Operating Systems are you using?
Python Version
3.10
axolotl branch-commit
main
Acknowledgements