Many thanks for this cool work!
I have two questions regarding the particle filtering (PF) model:
The model does not produce the same results for the same activation function of the same track. In the attached Jupyter notebook (pf-repeat-issue.ipynb), I run PF on the same activation function five times and get different results each time.
The model does not work on an "ideal activation function". In pf-groundtruth-issue.ipynb, I generate an ideal activation function from the beat annotations, so it has peaks only at the beat positions. However, PF yields very low recall on it.
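For context, the ideal activation in pf-groundtruth-issue.ipynb is built roughly like this (the 100 fps frame rate, unit impulse height, and function name here are my own choices for illustration; the notebook has the exact code):

```python
import numpy as np

def ideal_activation(beat_times, fps=100, duration=None):
    """Build an 'ideal' activation function: 1.0 exactly at the
    annotated beat frames, 0.0 everywhere else.

    beat_times: beat annotations in seconds
    fps: frame rate of the activation function (assumed 100 here)
    """
    if duration is None:
        # pad one second past the last annotated beat
        duration = beat_times[-1] + 1.0
    act = np.zeros(int(round(duration * fps)))
    # convert beat times (seconds) to frame indices
    frames = np.round(np.asarray(beat_times) * fps).astype(int)
    act[frames] = 1.0
    return act

# beats annotated at 0.5 s intervals
act = ideal_activation([0.5, 1.0, 1.5, 2.0], fps=100)
```

So the input is a sparse impulse train with perfect peaks at the annotated beats, and I would have expected near-perfect recall on it.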
Are these issues expected due to the sampling process of PF, or is there a way to avoid or alleviate them? Also, do you have any data on the variance/std of PF performance under different conditions (e.g., genres)?
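To illustrate what I mean by the sampling process causing non-repeatable results, here is a toy bootstrap particle filter (purely illustrative, not your model; all names are made up). Its resampling step draws from an RNG, so the output changes across runs unless the seed is fixed:

```python
import numpy as np

def run_pf(obs, n_particles=500, seed=None):
    """Toy 1-D bootstrap particle filter: tracks a latent state
    from noisy observations. Illustrative only."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        # propagate particles with process noise
        particles = particles + rng.normal(0.0, 0.1, n_particles)
        # weight particles by a Gaussian observation likelihood
        w = np.exp(-0.5 * (y - particles) ** 2)
        w /= w.sum()
        # multinomial resampling -- the stochastic step that makes
        # repeated runs differ when the RNG is not seeded
        particles = rng.choice(particles, size=n_particles, p=w)
        estimates.append(particles.mean())
    return np.array(estimates)

obs = np.sin(np.linspace(0, 2 * np.pi, 50))
a = run_pf(obs, seed=0)
b = run_pf(obs, seed=0)  # same seed: identical trajectory
c = run_pf(obs, seed=1)  # different seed: generally differs
```

If the run-to-run differences in your PF come only from this kind of resampling noise, exposing a seed parameter (or averaging over runs) might be enough to make results reproducible.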
Just want to make sure I didn't use your model wrong. Thank you!
The notebooks are shared via Google Drive: https://drive.google.com/drive/folders/1_H8u847bVnUP7Lfome8WuO98FNaU4Jew?usp=sharing