sed_eval bindings? #184

Open
bmcfee opened this issue Feb 22, 2018 · 6 comments
Assignee: bmcfee
Labels: enhancement, interoperability (Making JAMS play nice with other packages)

@bmcfee (Contributor) commented Feb 22, 2018:

[Tagging @justinsalamon ]

The jams.eval module provides a unified interface between JAMS annotations and mir_eval metrics. Would it be possible to add bindings to sed_eval as well, for evaluating tag_* annotations? I haven't used sed_eval directly, but this seems like it would be useful for handling things like instrument detection.
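
For context, a minimal sketch of the existing jams.eval pattern, using the beat binding as an example (file names are hypothetical):

```python
import jams

# Hypothetical file names; each JAMS file holds annotations for one track
ref = jams.load('reference.jams')
est = jams.load('estimate.jams')

# jams.eval wraps mir_eval per namespace, e.g. beat tracking
ref_ann = ref.search(namespace='beat')[0]
est_ann = est.search(namespace='beat')[0]
scores = jams.eval.beat(ref_ann, est_ann)  # dict of mir_eval beat metrics
```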

@justinsalamon (Contributor) commented:

I think this could be of interest, not only for MIR tasks (e.g., instrument ID) but also directly for environmental sound evaluation via JAMS files.

One possible complication, however: if I remember correctly, the sed_eval paradigm is that, given a collection of recordings for evaluation (a test set), intermediate statistics are accumulated across all files before the final set of metrics is computed. This might be a little tricky to support given the collection-agnostic paradigm JAMS currently follows.
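
To illustrate the paradigm I mean, a minimal sketch using sed_eval's segment-based metrics (the labels, times, and two-recording collection are made up, and I'm assuming sed_eval's dict-based event-list format):

```python
import sed_eval

# Toy event lists in sed_eval's format (hypothetical labels and times);
# each (reference, estimate) pair stands in for one recording
dataset = [
    ([{'event_onset': 0.0, 'event_offset': 2.0, 'event_label': 'piano'}],
     [{'event_onset': 0.2, 'event_offset': 2.1, 'event_label': 'piano'}]),
    ([{'event_onset': 1.0, 'event_offset': 3.0, 'event_label': 'guitar'}],
     [{'event_onset': 1.0, 'event_offset': 2.5, 'event_label': 'piano'}]),
]

# Intermediate statistics accumulate inside one metrics object ...
metrics = sed_eval.sound_event.SegmentBasedMetrics(
    event_label_list=['guitar', 'piano'], time_resolution=1.0)

for ref_events, est_events in dataset:
    metrics.evaluate(reference_event_list=ref_events,
                     estimated_event_list=est_events)

# ... and the final metrics come from the pooled statistics,
# not from averaging per-file scores
print(metrics.results_overall_metrics())
```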

@bmcfee (Contributor, Author) commented Feb 22, 2018:

I would just simplify that to a collection of size 1 for each call to eval.tag. Does that seem reasonable?

@justinsalamon (Contributor) commented:

Yes and no. Yes, in that it lets you get per-file scores. No, in that averaging per-file scores gives a different result from computing per-file intermediate statistics and then a final set of metrics, and the latter is what the sed_eval folks (and consequently DCASE) advocate. By only supporting per-file metrics, we might encourage someone to do the former, which would give them eval results that are inconsistent with what's expected in the literature (or what will eventually be expected once the dust settles on SED).
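
A toy illustration of the discrepancy, with hypothetical per-file counts:

```python
# Hypothetical per-file intermediate statistics
# (true positives, false positives, false negatives)
files = [
    {'tp': 9, 'fp': 1, 'fn': 0},   # easy recording, many events
    {'tp': 1, 'fp': 4, 'fn': 5},   # hard recording, few correct events
]

def f_score(tp, fp, fn):
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Averaging per-file scores (what per-file-only bindings might encourage)
mean_f = sum(f_score(**f) for f in files) / len(files)

# Pooling intermediate statistics first (the sed_eval/DCASE recommendation)
pooled = {k: sum(f[k] for f in files) for k in ('tp', 'fp', 'fn')}
pooled_f = f_score(**pooled)

print(f'mean of per-file F-scores: {mean_f:.3f}')    # ~0.565
print(f'F-score from pooled stats: {pooled_f:.3f}')  # ~0.667
```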

@bmcfee (Contributor, Author) commented Feb 22, 2018:

Well, I've never liked collection-wise error reporting, but you could handle it gracefully by accepting a sed_eval metrics object as an optional parameter. If none is provided, one is constructed. That way, you can get track-wise metrics easily, and collection-wise metrics with a bit more work.
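
Roughly, a sketch of what I have in mind (hypothetical signature, not an actual jams.eval function; assumes sed_eval's SegmentBasedMetrics and the JAMS observation fields time/duration/value):

```python
import sed_eval

def tag(ref_ann, est_ann, metrics=None, labels=None, time_resolution=1.0):
    """Hypothetical binding: evaluate a pair of tag_* annotations with sed_eval.

    With metrics=None, a fresh object is built, giving track-wise scores.
    Passing the same metrics object across calls accumulates intermediate
    statistics for collection-wise results.
    """
    if metrics is None:
        if labels is None:
            # The label vocabulary must cover every file the object will see
            labels = sorted({obs.value for obs in ref_ann.data} |
                            {obs.value for obs in est_ann.data})
        metrics = sed_eval.sound_event.SegmentBasedMetrics(
            event_label_list=labels, time_resolution=time_resolution)

    def to_events(ann):
        # Map JAMS observations onto sed_eval's event-list format
        return [{'event_onset': obs.time,
                 'event_offset': obs.time + obs.duration,
                 'event_label': obs.value}
                for obs in ann.data]

    metrics.evaluate(reference_event_list=to_events(ref_ann),
                     estimated_event_list=to_events(est_ann))
    return metrics
```

A single call then gives track-wise results via metrics.results(), while threading one metrics object through a loop over a collection yields the pooled, DCASE-style numbers.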

@justinsalamon (Contributor) commented:

That sounds like a reasonable solution to me.

@bmcfee bmcfee added this to the 0.3.2 milestone Feb 22, 2018
@bmcfee bmcfee self-assigned this Feb 23, 2018
@bmcfee (Contributor, Author) commented Apr 13, 2018:

(Delayed update)

This one is stalled for a couple of reasons relating to the sed_eval dependency chain.

@bmcfee bmcfee removed this from the 0.3.2 milestone Apr 13, 2018
@bmcfee bmcfee added the interoperability Making JAMS play nice with other packages label Aug 12, 2019