Pin numpy v1 for onnxruntime #1921

Merged: 4 commits, Jun 27, 2024. The diff below shows changes from all commits.
.github/workflows/test_offline.yml (42 changes: 23 additions & 19 deletions)

@@ -2,9 +2,9 @@ name: Offline usage / Python - Test

 on:
   push:
-    branches: [ main ]
+    branches: [main]
   pull_request:
-    branches: [ main ]
+    branches: [main]

 concurrency:
   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}

@@ -15,29 +15,33 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        python-version: [3.9]
+        python-version: [3.8, 3.9]
         os: [ubuntu-20.04]

     runs-on: ${{ matrix.os }}
     steps:
-      - uses: actions/checkout@v2
-      - name: Setup Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v2
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Install dependencies for pytorch export
-        run: |
-          pip install .[tests,exporters,onnxruntime]
-      - name: Test with unittest
-        run: |
+      - name: Checkout code
+        uses: actions/checkout@v4
+
+      - name: Setup Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v5
+        with:
+          python-version: ${{ matrix.python-version }}
+
+      - name: Install dependencies for pytorch export
+        run: |
+          pip install .[tests,exporters,onnxruntime]
+
+      - name: Test with pytest
+        run: |
           HF_HOME=/tmp/ huggingface-cli download hf-internal-testing/tiny-random-gpt2

           HF_HOME=/tmp/ HF_HUB_OFFLINE=1 optimum-cli export onnx --model hf-internal-testing/tiny-random-gpt2 gpt2_onnx --task text-generation

           huggingface-cli download hf-internal-testing/tiny-random-gpt2

           HF_HUB_OFFLINE=1 optimum-cli export onnx --model hf-internal-testing/tiny-random-gpt2 gpt2_onnx --task text-generation

           pytest tests/onnxruntime/test_modeling.py -k "test_load_model_from_hub and not from_hub_onnx" -s -vvvvv

           HF_HUB_OFFLINE=1 pytest tests/onnxruntime/test_modeling.py -k "test_load_model_from_hub and not from_hub_onnx" -s -vvvvv
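The offline job above first downloads the test model while the network is still allowed, then repeats the export and the pytest selection with HF_HUB_OFFLINE=1 so that any accidental Hub call fails. A minimal Python sketch of the same idea (illustrative only, not part of this PR; it assumes the gpt2_onnx directory produced by the export step exists locally):

import os

# Illustrative sketch only: mimic the workflow's offline check.
# Forbid Hub network access before importing anything that might call it.
os.environ["HF_HUB_OFFLINE"] = "1"

from optimum.onnxruntime import ORTModelForCausalLM

# Loading the ONNX model exported locally by `optimum-cli export onnx ... gpt2_onnx`
# must succeed without any request to the Hugging Face Hub.
model = ORTModelForCausalLM.from_pretrained("gpt2_onnx")
print(type(model).__name__)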
setup.py (4 changes: 2 additions & 2 deletions)

@@ -18,7 +18,7 @@
     "transformers[sentencepiece]>=4.26.0,<4.42.0",
     "torch>=1.11",
     "packaging",
-    "numpy",
+    "numpy<2.0",  # transformers requires numpy<2.0 https://github.com/huggingface/transformers/pull/31569
     "huggingface_hub>=0.8.0",
     "datasets",
 ]

@@ -79,10 +79,10 @@
     "openvino": "optimum-intel[openvino]>=1.16.0",
     "nncf": "optimum-intel[nncf]>=1.16.0",
     "neural-compressor": "optimum-intel[neural-compressor]>=1.16.0",
-    "graphcore": "optimum-graphcore",
     "habana": ["optimum-habana", "transformers >= 4.38.0, < 4.39.0"],
     "neuron": ["optimum-neuron[neuron]>=0.0.20", "transformers >= 4.36.2, < 4.42.0"],
     "neuronx": ["optimum-neuron[neuronx]>=0.0.20", "transformers >= 4.36.2, < 4.42.0"],
+    "graphcore": "optimum-graphcore",
     "furiosa": "optimum-furiosa",
     "amd": "optimum-amd",
     "dev": TESTS_REQUIRE + QUALITY_REQUIRE,
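For background on the dependency change above: NumPy 2.0 is generally incompatible with extension modules compiled against the NumPy 1.x ABI, which is why transformers (and the onnxruntime stack tested here) need numpy<2.0 until 2.0-compatible builds ship. A small, illustrative sanity check (not part of this PR) that an environment resolved the pin correctly:

# Illustrative sanity check: confirm the resolver kept NumPy on 1.x.
import numpy as np
from packaging.version import Version  # `packaging` is already a core dependency above

assert Version(np.__version__) < Version("2.0"), (
    f"numpy {np.__version__} is installed, but this stack expects numpy<2.0"
)

# onnxruntime builds compiled against the NumPy 1.x ABI typically fail to import under NumPy 2.x.
import onnxruntime

print("numpy", np.__version__, "| onnxruntime", onnxruntime.__version__)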