Release (#209)
IlyasMoutawwakil committed May 17, 2024
1 parent 14fc8ac commit 0b24af9
Showing 6 changed files with 25 additions and 22 deletions.
4 changes: 1 addition & 3 deletions .github/workflows/test_cli_misc.yaml

@@ -8,7 +8,6 @@ on:
     paths:
       - .github/workflows/test_cli_misc.yaml
       - "optimum_benchmark/**"
-      - "docker/**"
       - "tests/**"
       - "setup.py"
   pull_request:
@@ -17,7 +16,6 @@ on:
     paths:
       - .github/workflows/test_cli_misc.yaml
       - "optimum_benchmark/**"
-      - "docker/**"
       - "tests/**"
       - "setup.py"
 
@@ -31,7 +29,7 @@ jobs:
       fail-fast: false
       matrix:
         os: [ubuntu-latest]
-        python: ["3.8", "3.10"]
+        python: ["3.8", "3.9", "3.10"]
 
     runs-on: ${{ matrix.os }}
 
3 changes: 1 addition & 2 deletions .github/workflows/update_llm_perf_cuda_pytorch.yaml

@@ -29,7 +29,6 @@ jobs:
       - name: Run benchmarks
         uses: addnab/docker-run-action@v3
         env:
-          IMAGE: ${{ env.IMAGE }}
           SUBSET: ${{ matrix.subset }}
           MACHINE: ${{ matrix.machine.name }}
           HF_TOKEN: ${{ secrets.HF_TOKEN }}
@@ -49,5 +48,5 @@ jobs:
           run: |
             pip install packaging && pip install flash-attn einops scipy auto-gptq optimum bitsandbytes autoawq codecarbon
             pip install -U transformers huggingface_hub[hf_transfer]
-            pip install -e .
+            pip install optimum-benchmark
             python llm_perf/update_llm_perf_cuda_pytorch.py
13 changes: 10 additions & 3 deletions README.md

@@ -1,15 +1,22 @@
-<p align="center"><img src="logo.png" alt="Optimum-Benchmark Logo" width="350" style="max-width: 100%;" /></p>
+<p align="center"><img src="https://github.com/raw/huggingface/optimum-benchmark/main/logo.png" alt="Optimum-Benchmark Logo" width="350" style="max-width: 100%;" /></p>
 <p align="center"><q>All benchmarks are wrong, some will cost you less than others.</q></p>
 <h1 align="center">Optimum-Benchmark 🏋️</h1>
 
+[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/optimum-benchmark)](https://pypi.org/project/optimum-benchmark/)
+[![PyPI - Version](https://img.shields.io/pypi/v/optimum-benchmark)](https://pypi.org/project/optimum-benchmark/)
+[![PyPI - Downloads](https://img.shields.io/pypi/dm/optimum-benchmark)](https://pypi.org/project/optimum-benchmark/)
+[![PyPI - Implementation](https://img.shields.io/pypi/implementation/optimum-benchmark)](https://pypi.org/project/optimum-benchmark/)
+[![PyPI - Format](https://img.shields.io/pypi/format/optimum-benchmark)](https://pypi.org/project/optimum-benchmark/)
+[![PyPI - License](https://img.shields.io/pypi/l/optimum-benchmark)](https://pypi.org/project/optimum-benchmark/)
+
 Optimum-Benchmark is a unified [multi-backend & multi-device](#backends--devices-) utility for benchmarking [Transformers](https://github.com/huggingface/transformers), [Diffusers](https://github.com/huggingface/diffusers), [PEFT](https://github.com/huggingface/peft), [TIMM](https://github.com/huggingface/pytorch-image-models) and [Optimum](https://github.com/huggingface/optimum) libraries, along with all their supported [optimizations & quantization schemes](#backends--devices-), for [inference & training](#scenarios-), in [distributed & non-distributed settings](#launchers-), in the most correct, efficient and scalable way possible.
 
 *News* 📰
 
-- PyPI package is now available for installation: `pip install optimum-benchmark` 🎉 check it out !
+- PyPI package is now available for installation: `pip install optimum-benchmark` 🎉 [check it out](https://pypi.org/project/optimum-benchmark/) !
 - Hosted 4 minimal docker images (`cpu`, `cuda`, `rocm`, `cuda-ort`) in [packages](https://github.com/huggingface/optimum-benchmark/pkgs/container/optimum-benchmark) for testing, benchmarking and reproducibility 🐳
 - Added vLLM backend for benchmarking [vLLM](https://github.com/vllm-project/vllm)'s inference engine 🚀
-- Hosted the codebase of the LLM-Perf Leaderboard [LLM-Perf](https://huggingface.co/spaces/optimum/llm-perf-leaderboard) 🥇
+- Hosted the codebase of the [LLM-Perf Leaderboard](https://huggingface.co/spaces/optimum/llm-perf-leaderboard) 🥇
 - Added Py-TXI backend for benchmarking [Py-TXI](https://github.com/IlyasMoutawwakil/py-txi/tree/main) 🚀
 - Introduced a Python API for running isolated benchmarks from the comfort of your Python scripts 🐍
 - Simplified the CLI interface for running benchmarks using the Hydra CLI 🧪
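The README's news items mention a simplified Hydra-based CLI. As a rough illustration only, a benchmark config for `optimum-benchmark --config-dir . --config-name pytorch_bert` might look like the fragment below; the group names and fields are assumptions based on typical Hydra layouts and the project's examples, not taken from this diff, so consult the README for the real schema.

```yaml
# Hypothetical Hydra config sketch; field names are illustrative assumptions.
defaults:
  - benchmark
  - scenario: inference
  - launcher: process
  - backend: pytorch
  - _self_

name: pytorch_bert

backend:
  device: cpu
  model: bert-base-uncased
```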
8 changes: 4 additions & 4 deletions llm_perf/utils.py

@@ -4,13 +4,11 @@
 
 from optimum_benchmark.report import BenchmarkReport
 
-OPEN_LLM_LEADERBOARD = pd.read_csv("hf://datasets/optimum-benchmark/open-llm-leaderboard/open-llm-leaderboard.csv")
-
-
 INPUT_SHAPES = {"batch_size": 1, "sequence_length": 256}
 GENERATE_KWARGS = {"max_new_tokens": 64, "min_new_tokens": 64}
 
 
+OPEN_LLM_LEADERBOARD = pd.read_csv("hf://datasets/optimum-benchmark/llm-perf-leaderboard/llm-df.csv")
 OPEN_LLM_LIST = OPEN_LLM_LEADERBOARD.drop_duplicates(subset=["Model"])["Model"].tolist()
 PRETRAINED_OPEN_LLM_LIST = (
     OPEN_LLM_LEADERBOARD[OPEN_LLM_LEADERBOARD["Type"] == "pretrained"]
@@ -44,7 +42,9 @@
 #     "Qwen",
 # ],
 # ]
-# CANONICAL_PRETRAINED_OPEN_LLM_LIST = [model for model in PRETRAINED_OPEN_LLM_LIST if model.split("/")[0] in CANONICAL_ORGANIZATIONS]
+# CANONICAL_PRETRAINED_OPEN_LLM_LIST = [
+#     model for model in PRETRAINED_OPEN_LLM_LIST if model.split("/")[0] in CANONICAL_ORGANIZATIONS
+# ]
 CANONICAL_PRETRAINED_OPEN_LLM_LIST = [
     "01-ai/Yi-34B",
     "01-ai/Yi-6B",
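The commented-out `CANONICAL_PRETRAINED_OPEN_LLM_LIST` comprehension in `llm_perf/utils.py` filters models by their Hub namespace (the organization before the `/`). A self-contained sketch of that logic, using made-up model names rather than the real leaderboard data:

```python
# Illustrative stand-ins for the leaderboard-derived lists in llm_perf/utils.py;
# the model names here are examples, not real leaderboard rows.
CANONICAL_ORGANIZATIONS = ["01-ai", "mistralai"]

PRETRAINED_OPEN_LLM_LIST = [
    "01-ai/Yi-6B",
    "mistralai/Mistral-7B-v0.1",
    "someuser/my-finetune",  # not a canonical organization, filtered out
]

# Keep only models whose namespace (the part before "/") is canonical.
CANONICAL_PRETRAINED_OPEN_LLM_LIST = [
    model for model in PRETRAINED_OPEN_LLM_LIST if model.split("/")[0] in CANONICAL_ORGANIZATIONS
]

print(CANONICAL_PRETRAINED_OPEN_LLM_LIST)  # ['01-ai/Yi-6B', 'mistralai/Mistral-7B-v0.1']
```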
2 changes: 1 addition & 1 deletion optimum_benchmark/version.py

@@ -12,4 +12,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-__version__ = "0.2.0"
+__version__ = "0.2.1"
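The release bumps `__version__` from 0.2.0 to 0.2.1, a patch release. When comparing such strings programmatically, note that plain string comparison is lexicographic and misorders multi-digit components; a minimal stdlib-only sketch (the `parse_version` helper is illustrative, not part of the package):

```python
def parse_version(version: str) -> tuple:
    # Convert "0.2.1" into (0, 2, 1) so comparisons are numeric per component.
    return tuple(int(part) for part in version.split("."))

# Numeric tuples order correctly where raw strings would not:
assert parse_version("0.2.1") > parse_version("0.2.0")
assert parse_version("0.10.0") > parse_version("0.9.0")  # numerically correct
assert not ("0.10.0" > "0.9.0")  # lexicographic string comparison gets this wrong
```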
17 changes: 8 additions & 9 deletions setup.py

@@ -98,26 +98,25 @@
     extras_require=EXTRAS_REQUIRE,
     entry_points={"console_scripts": ["optimum-benchmark=optimum_benchmark.cli:main"]},
     description="Optimum-Benchmark is a unified multi-backend utility for benchmarking "
-    "Transformers, Timm, Diffusers and Sentence-Transformers with full support of Optimum's "
-    "hardware optimizations & quantization schemes.",
-    long_description=open("README.md", "r", encoding="utf-8").read(),
-    long_description_content_type="text/markdown",
+    "Transformers, Timm, Diffusers and Sentence-Transformers with full support of "
+    "Optimum's hardware optimizations & quantization schemes.",
+    url="https://github.com/huggingface/optimum-benchmark",
     classifiers=[
-        "License :: OSI Approved :: Apache Software License",
-        "Intended Audience :: Developers",
         "Intended Audience :: Education",
+        "Intended Audience :: Developers",
-        "Operating System :: POSIX :: Linux",
         "Intended Audience :: Science/Research",
+        "Operating System :: OS Independent",
-        "Programming Language :: Python :: 3.7",
         "Programming Language :: Python :: 3.8",
         "Programming Language :: Python :: 3.9",
         "Programming Language :: Python :: 3.10",
+        "License :: OSI Approved :: Apache Software License",
         "Topic :: Scientific/Engineering :: Artificial Intelligence",
     ],
     keywords="benchmark, transformers, quantization, pruning, optimization, training, inference, onnx, onnx runtime, intel, "
     "habana, graphcore, neural compressor, ipex, ipu, hpu, llm-swarm, py-txi, vllm, auto-gptq, autoawq, "
     "sentence-transformers, bitsandbytes, codecarbon, flash-attn, deepspeed, diffusers, timm, peft",
-    url="https://github.com/huggingface/optimum-benchmark",
+    long_description=open("README.md", "r", encoding="utf-8").read(),
+    long_description_content_type="text/markdown",
     author="HuggingFace Inc. Special Ops Team",
     include_package_data=True,
     name="optimum-benchmark",
