try #2: pin hf transformers and accelerate to latest release, don't reinstall pytorch (#867)

* isolate torch from the requirements.txt

* fix typo for removed line ending

* pin transformers and accelerate to latest releases

* try w auto-gptq==0.5.1

* update README to remove manual peft install

* pin xformers to 0.0.22

* bump flash-attn to 2.3.3

* pin flash attn to exact version
winglian committed Nov 16, 2023
1 parent 3cc67d2 commit 0de1457
Showing 3 changed files with 6 additions and 8 deletions.
1 change: 1 addition & 0 deletions .github/workflows/tests.yml
@@ -71,6 +71,7 @@ jobs:

- name: Install dependencies
run: |
pip3 install --extra-index-url https://download.pytorch.org/whl/cu118 -U torch==2.0.1
pip3 uninstall -y transformers accelerate
pip3 install -U -e .[flash-attn]
pip3 install -r requirements-tests.txt
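
A minimal post-install sanity check (not part of the commit), assuming the step above completed in a CUDA 11.8 environment; the expected numbers mirror the pins introduced in this commit:

    python3 -c "import torch; print(torch.__version__)"               # expect 2.0.1 (cu118 build)
    python3 -c "import transformers; print(transformers.__version__)" # expect 4.35.1
    python3 -c "import accelerate; print(accelerate.__version__)"     # expect 0.24.1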
1 change: 0 additions & 1 deletion README.md
@@ -91,7 +91,6 @@ cd axolotl

pip3 install packaging
pip3 install -e '.[flash-attn,deepspeed]'
pip3 install -U git+https://github.com/huggingface/peft.git

# finetune lora
accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml
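
With the manual peft install dropped from the README, peft is expected to come from the pinned release in requirements.txt (peft==0.6.0). A minimal sketch for confirming that after installation, assuming pip3 points at the same interpreter used above:

    pip3 show peft | grep -E '^(Name|Version)'   # Version should read 0.6.0 per requirements.txt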
12 changes: 5 additions & 7 deletions requirements.txt
@@ -1,22 +1,20 @@
--extra-index-url https://download.pytorch.org/whl/cu118
--extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
torch==2.0.1
auto-gptq==0.4.2
auto-gptq==0.5.1
packaging
peft==0.6.0
transformers @ git+https://github.com/huggingface/transformers.git@acc394c4f5e1283c19783581790b3dc3105a3697
transformers==4.35.1
bitsandbytes>=0.41.1
accelerate @ git+https://github.com/huggingface/accelerate@80da9cfb09bb3cc9f1b385cb55d6b90d025a5fd9
accelerate==0.24.1
deepspeed
addict
fire
PyYAML>=6.0
datasets>=2.14.0
flash-attn>=2.3.0
flash-attn==2.3.3
sentencepiece
wandb
einops
xformers>=0.0.22
xformers==0.0.22
optimum==1.13.2
hf_transfer
colorama
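
Since torch is no longer listed in this file, it has to be installed before the project so pip does not pull in (or replace) a different build. A minimal sketch of the intended order, assuming a CUDA 11.8 environment and mirroring the commands already shown in the workflow and README above:

    # install torch explicitly first; requirements.txt no longer pins it
    pip3 install --extra-index-url https://download.pytorch.org/whl/cu118 -U torch==2.0.1
    # then install the project; the pinned transformers/accelerate/auto-gptq releases come from this file
    pip3 install -e '.[flash-attn,deepspeed]'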
