
Add CLI to evaluate models, list supported tasks etc #53

Closed
lewtun opened this issue Feb 26, 2024 · 1 comment
Labels
feature request New feature/request

Comments


lewtun commented Feb 26, 2024

It would be nice if one could launch evaluation jobs directly from the command line with something like:

lighteval --tasks="lighteval|hellaswag|5|1" --output_dir "/scratch/evals" --model_args "pretrained=gpt2"

By default, we could look for the default accelerate config, but allow users to override it if needed with:

lighteval --accelerate_config=path/to/accelerate/config --tasks="lighteval|hellaswag|5|1" --output_dir "/scratch/evals" --model_args "pretrained=gpt2"

It would also be nice if the CLI would produce a list of supported tasks with something like

lighteval --list-tasks
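For illustration, here is a minimal sketch of what the argument surface described above could look like, using Python's standard `argparse`. The flag names are taken from the commands in this issue; the task registry and dispatch logic are purely hypothetical placeholders, not lighteval's actual implementation.

```python
import argparse

# Hypothetical placeholder registry; lighteval's real task list is discovered elsewhere.
SUPPORTED_TASKS = [
    "lighteval|hellaswag|5|1",
    "lighteval|arc|25|1",
]

def build_parser() -> argparse.ArgumentParser:
    """Build a parser mirroring the flags proposed in this issue."""
    parser = argparse.ArgumentParser(prog="lighteval")
    parser.add_argument("--tasks", help='task spec, e.g. "lighteval|hellaswag|5|1"')
    parser.add_argument("--output_dir", help="directory to write evaluation results to")
    parser.add_argument("--model_args", help='model settings, e.g. "pretrained=gpt2"')
    parser.add_argument("--accelerate_config",
                        help="path to an accelerate config overriding the default")
    parser.add_argument("--list-tasks", dest="list_tasks", action="store_true",
                        help="print the supported tasks and exit")
    return parser

def main(argv=None) -> int:
    args = build_parser().parse_args(argv)
    if args.list_tasks:
        # Print one task spec per line, then exit without evaluating.
        for task in SUPPORTED_TASKS:
            print(task)
        return 0
    # ... an evaluation job would be launched here using args ...
    return 0
```

With this shape, `lighteval --list-tasks` would print the registry and exit, while the other flags feed the evaluation launcher.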
@clefourrier clefourrier added the feature request New feature/request label Feb 27, 2024
@clefourrier

We now have a better CLI and can show supported tasks. #228 will soon allow displaying samples too, so I'm closing :)
