Hi @quancs, from the issue that you linked it seems that the author's solution is basically to do the evaluation in double instead of float. I can confirm that doing this fixes the example you sent. Do you think we should cast the user's input here to double instead of float? Alternatively, we could add a note to the docstring that in some cases it is better to evaluate using double precision.
This issue happens with the torch version on CPU; on GPU it's OK. I also don't see any violations in my past experiment results tested on GPU.
I have some ideas to fix this:

1. Convert to double in all cases, regardless of CPU or GPU, but this may make the metric slow.
2. Convert to double on CPU only, but this may make the metric slow, and we don't know whether GPU is really unaffected.
3. Convert to double only when we detect that the result is not a valid number (NaN or Inf), and run the computation again.

I prefer 3). What do you think? Or do you have other ideas?
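Option 3 can be sketched as a small fallback wrapper. This is only an illustration of the retry-in-double idea, not the torchmetrics implementation; `metric_fn` is a stand-in for the actual SDR computation:

```python
import torch

def with_double_fallback(metric_fn, preds, target):
    """Run metric_fn in the input precision; if the result contains
    NaN/Inf, re-run with double-precision inputs (sketch of option 3)."""
    out = metric_fn(preds, target)
    if not torch.isfinite(out).all():
        # Numerical failure in the original precision: redo the computation
        # in float64 and cast the result back to the input dtype.
        out = metric_fn(preds.double(), target.double()).to(preds.dtype)
    return out
```

The appeal of this variant is that the double-precision path (and its slowdown) is only taken on the rare inputs that actually fail, so well-conditioned cases keep float32 speed on both CPU and GPU.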
🐛 Bug
This issue is related to fast-bss-eval's torch version; see fakufaku/fast_bss_eval#5.

To Reproduce
outputs:

Unzip data.zip to get the debug.npz file.
Code sample
Expected behavior
The results given by signal_distortion_ratio should be close to the ones given by mir_eval.
Environment

How you installed PyTorch (conda, pip, build command if you used source):

Additional context