
Results vary a lot between training runs #14

Open

tommying opened this issue Nov 22, 2021 · 3 comments

Comments

@tommying

Hello!
I'd like to ask why the model from each training run gives very different results. (Is it because the cut and paste locations within an image are random each time?)

For example, for the bottle class I trained twice with the same parameters and got two models, model1 and model2.
At test time, model1 gives AUC1 = 99.444 and model2 gives AUC2 = 99.841.
Some variance is expected, but this difference seems far too large.
Is there any way to improve this?

Thanks

@Runinho
Owner

Runinho commented Nov 27, 2021

Hi,

Maybe follow the PyTorch guide on [reproducibility](https://pytorch.org/docs/stable/notes/randomness.html).

You should set a seed for PyTorch and the `random` library and pass the `--workers 0` argument. Also disable the nondeterministic CUDA algorithms (see the linked guide).

I think this would improve reproducibility.
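
For reference, a minimal sketch of such a setup (only an illustration, assuming the training script uses `torch`, `numpy`, and Python's `random`; the seed value and where to call this are up to you):

```python
# Minimal reproducibility sketch following the linked PyTorch guide.
# Assumes the training script uses torch, numpy and Python's random module.
import os
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # seeds the CPU and all CUDA devices

    # Prefer deterministic cuDNN kernels over auto-tuned ones.
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

    # Raise an error when an op has no deterministic implementation.
    # Some CUDA ops additionally require CUBLAS_WORKSPACE_CONFIG to be set.
    os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
    torch.use_deterministic_algorithms(True)


set_seed(0)
```

Combined with `--workers 0` (or a fixed `worker_init_fn` / `generator` for the DataLoader), two runs with the same parameters should then produce much closer results.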

@tommying
Author

tommying commented Dec 2, 2021

@Runinho ok, thanks

@RuojiWang

RuojiWang commented May 9, 2022


Buddy, it's most likely an initialization issue. As the maintainer said above, setting a fixed seed usually avoids it.
It's already pretty good that the foreign maintainer could follow your post in Chinese~
