xperience-ai/sotabench-eval

Easily evaluate machine learning models on public benchmarks

 
 


sotabencheval is a framework-agnostic library containing a collection of deep learning benchmarks you can use to evaluate your models. Used together with the sotabench service, it records results for models so the community can compare performance on different tasks, and it can act as a continuous-integration-style service that benchmarks the models in your repository on each commit.

Benchmarks Supported

PRs welcome for further benchmarks!

Installation

Requires Python 3.6+.

pip install sotabench-eval

Get Benching! 🏋️

You should read the full documentation here, which contains guidance on getting started and connecting to sotabench.

Integration is lightweight. For example, if you are evaluating an ImageNet model, you initialize an Evaluator object and (optionally) link it to the paper that introduced the model:

from sotabencheval.image_classification import ImageNetEvaluator

evaluator = ImageNetEvaluator(
    model_name='FixResNeXt-101 32x48d',
    paper_arxiv_id='1906.06423'
)

Then, for each batch of predictions your model makes on ImageNet, pass a dictionary mapping image IDs to np.ndarray logits to the evaluator.add method:

evaluator.add(output_dict=dict(zip(image_ids, batch_output)))
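For instance, given a batch of image IDs and per-image logit vectors, the dictionary can be built with dict(zip(...)). A minimal sketch (the IDs are hypothetical, and plain lists stand in for the np.ndarray logits, truncated to three classes for brevity):

```python
# Hypothetical batch: two validation image IDs and, for each image,
# a vector of class logits (in practice an np.ndarray of 1000 values).
image_ids = ['ILSVRC2012_val_00000001', 'ILSVRC2012_val_00000002']
batch_output = [[0.1, 2.3, -0.5], [1.7, -0.2, 0.9]]

# Map each image ID to its logits, the shape evaluator.add expects.
output_dict = dict(zip(image_ids, batch_output))
print(len(output_dict))  # 2
```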

The evaluation logic just needs to live in a sotabench.py file; sotabench will run it on each commit and record the results.
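To illustrate the overall shape of such a file, the sketch below runs the per-batch loop against a minimal stub evaluator so it is self-contained; the loop structure, not the stub, is the point. A real sotabench.py would use ImageNetEvaluator and your actual model and data loader:

```python
# Schematic of the per-batch evaluation loop in a sotabench.py file.
# EvaluatorStub is a hypothetical stand-in for sotabencheval's evaluators,
# used here only so the example runs without the library or a model.
class EvaluatorStub:
    def __init__(self):
        self.results = {}

    def add(self, output_dict):
        # Accumulate predictions across batches, keyed by image ID.
        self.results.update(output_dict)

evaluator = EvaluatorStub()

# Hypothetical batches of (image_ids, logits) a data loader might yield.
batches = [
    (['img_001', 'img_002'], [[0.2, 1.1], [0.9, -0.3]]),
    (['img_003'], [[1.5, 0.4]]),
]

for image_ids, batch_output in batches:
    # In a real script, batch_output would come from running the model
    # on this batch of images.
    evaluator.add(dict(zip(image_ids, batch_output)))

print(len(evaluator.results))  # 3
```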

Contributing

All contributions welcome!
