This repository has been archived by the owner on Apr 13, 2023. It is now read-only.
I am very interested in your paper, but I have a question about the accuracy assessment. In train you evaluate with `prfs(labels.data.cpu().numpy().flatten(), cd_preds.data.cpu().numpy().flatten(), average='binary', pos_label=1)`, while in eval you use `tn, fp, fn, tp = confusion_matrix(labels.data.cpu().numpy().flatten(), cd_preds.data.cpu().numpy().flatten()).ravel()`. During my experiments, the accuracy reported by the two methods differed by up to 10%. Why are the two methods so different, and which one is more reliable for accuracy assessment?
Looking forward to your answer, thank you!
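For reference, the two scikit-learn calls quoted above should produce identical precision/recall/F1 when they are fed the same flattened arrays, since `prfs` is just a convenience wrapper over the same counts the confusion matrix exposes. Below is a minimal sketch demonstrating this equivalence; the sample arrays are hypothetical stand-ins for the flattened `labels` / `cd_preds` tensors, and passing `labels=[0, 1]` to `confusion_matrix` is an assumption added here to pin the `tn, fp, fn, tp` unpacking order even when a batch happens to contain only one class (a common source of silent discrepancies between the two approaches).

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support as prfs
from sklearn.metrics import confusion_matrix

# Hypothetical flattened binary masks standing in for labels / cd_preds.
labels = np.array([0, 0, 1, 1, 1, 0, 1, 0])
preds = np.array([0, 1, 1, 0, 1, 0, 1, 1])

# Method 1 (train-style): per-class metrics for the positive class.
p, r, f1, _ = prfs(labels, preds, average='binary', pos_label=1)

# Method 2 (eval-style): derive the same metrics from the confusion matrix.
# labels=[0, 1] fixes the cell order so .ravel() always yields tn, fp, fn, tp.
tn, fp, fn, tp = confusion_matrix(labels, preds, labels=[0, 1]).ravel()
p2 = tp / (tp + fp)
r2 = tp / (tp + fn)
f1_2 = 2 * p2 * r2 / (p2 + r2)

# Computed from the same counts, both methods must agree.
assert np.isclose(p, p2) and np.isclose(r, r2) and np.isclose(f1, f1_2)
```

If the two numbers diverge in practice, the usual culprits are averaging per-batch metrics instead of accumulating global counts, or comparing different quantities (overall accuracy `(tp + tn) / total` versus positive-class precision/recall).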