It would be nice if one could launch evaluation jobs directly from the command line with something like:

```
lighteval --tasks="lighteval|hellaswag|5|1" --output_dir "/scratch/evals" --model_args "pretrained=gpt2"
```
By default we could look for the default `accelerate` config, but allow users to override this if needed with:

```
lighteval --accelerate_config=path/to/accelerate/config --tasks="lighteval|hellaswag|5|1" --output_dir "/scratch/evals" --model_args "pretrained=gpt2"
```
It would also be nice if the CLI would produce a list of supported tasks with something like:

```
lighteval --list-tasks
```
We now have a better CLI and can show supported tasks. #228 will soon allow displaying samples too, so I'm closing :)