
[DOC] Simplify bench-ann scripts and documentation #1633

Closed

cjnolet opened this issue Jul 3, 2023 · 0 comments · Fixed by #1642

Labels
doc Documentation
Comments

cjnolet (Member) commented Jul 3, 2023

The current bench-ann scripts and workflow are fairly low-level and not as simple to use as they could be. One of the standards for benchmarking ANN algorithms in Python is the popular ann-benchmarks library: its scripts are very easy to use and the workflow is straightforward (specifying training/search params, querying with the available algorithms and datasets, plotting the results). I think our bench-ann has all of the same features, but they could be presented in a manner that is friendlier to an end user who just wants to compare the performance of our algorithms against others and doesn't want to jump through hoops to do so.

I propose we write scripts that mimic ann-benchmarks as closely as possible so that users already familiar with that repository can more easily use ours. This is also roughly how the big-ann-benchmarks scripts functioned.
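For illustration, a minimal sketch of what an ann-benchmarks-style entry point could look like. The script name, flag names, defaults, and the `raft_ivf_pq` algorithm identifier here are hypothetical, chosen only to mirror the flavor of ann-benchmarks' `run.py`; they are not the actual RAFT interface:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Hypothetical CLI mirroring the style of ann-benchmarks' run.py."""
    parser = argparse.ArgumentParser(description="Run ANN benchmarks")
    parser.add_argument("--dataset", default="glove-100-inner",
                        help="dataset to train on and query against")
    parser.add_argument("--algorithms", default="raft_ivf_pq",
                        help="comma-separated list of algorithms to benchmark")
    parser.add_argument("-k", "--count", type=int, default=10,
                        help="number of nearest neighbours to search for")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    for algo in args.algorithms.split(","):
        # Placeholder for the actual build/search/plot pipeline.
        print(f"benchmarking {algo} on {args.dataset} (k={args.count})")
```

The point of the sketch is the shape of the interface: a single command with a dataset, an algorithm list, and sensible defaults, rather than asking the user to wire up the low-level pieces themselves.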
