We have a few hundred audio classification models from `transformers` on the Hub, and it would be nice if one could easily evaluate them for accuracy, F1 score, etc. with an evaluator.

The pipeline in question is `audio-classification` and the relevant docs are here.

A good model/dataset pair to test is this one for keyword spotting (`ks`):

- model: `superb/wav2vec2-base-superb-ks`: https://huggingface.co/superb/wav2vec2-base-superb-ks
- dataset: the `ks` config of `superb`: https://huggingface.co/datasets/superb

Together with #324, this evaluator would bring coverage to all audio `transformers` models on the Hub 🔥