
How to reduce the big experiment randomness? #25

Open
jingliang95 opened this issue Oct 28, 2021 · 3 comments

Comments

@jingliang95

Using this code, I observe very large run-to-run variance. For example, on the QNRF dataset I get the following test-set results (MAE and MSE):
run 1: 87.621, 149.75
run 2: 92.988, 168.47
run 3: 96.175, 167.79

The paper reports 85.6 and 148.3. May I ask if the authors have any ideas for reducing this randomness? With variance this large, how can we draw conclusions about which model performs well and which doesn't?

Thanks a lot.
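(Not an answer from the authors, but a common first step for run-to-run variance in PyTorch is fixing every random seed and disabling cuDNN's non-deterministic kernel selection. The sketch below is a generic helper, not taken from this repository; the function name `seed_everything` and the default seed are my own. Note that cuDNN determinism can cost some speed, and some CUDA ops remain non-deterministic even with these flags.)

```python
import os
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Fix all common sources of randomness in a PyTorch experiment."""
    random.seed(seed)                      # Python's built-in RNG
    np.random.seed(seed)                   # NumPy RNG (e.g. data augmentation)
    torch.manual_seed(seed)                # CPU RNG
    torch.cuda.manual_seed_all(seed)       # all GPU RNGs (no-op without CUDA)
    # Make cuDNN pick deterministic kernels instead of benchmarking for speed.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    os.environ["PYTHONHASHSEED"] = str(seed)
```

Even with full seeding, results on different GPUs or CUDA/cuDNN versions can still differ, so the usual practice for comparing models is to average MAE/MSE over several seeds and report the mean and standard deviation rather than a single run.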

@Boyu-Wang
Collaborator

Could you also share which PyTorch and CUDA versions and which GPU you are using?

@jingliang95
Author

Sure: PyTorch 1.7.1, CUDA 11.1, on a V100 GPU.

Did you repeat your experiments on QNRF? If so, can you report the results of the different runs? Thanks a lot.

@midasklr

I met the same problem with the SHA dataset. I even got a better MAE (57.72) and MSE (93.81) than the paper (59.7 and 95.7) when I changed some parameters...
