
Question about batch size, iters, and epochs #32

Closed

absqh opened this issue May 28, 2024 · 8 comments

Comments

@absqh

absqh commented May 28, 2024

I'd like to ask what machine the author used for the experiments. Running the SYSU dataset on the BCD task with a single 4090, I can only set the batch size to 8. Should max_iters then be changed to 640000 (the paper sets the training iters to 20000, and the command line in the README uses bs=16 and max_iters=320000) so that the number of epochs matches? Or do I only need to match the paper's iters=20000, in which case I would set max_iters=40000, which would cut the computational cost by more than a factor of ten?

@ChenHongruixuan
Owner

Hi,

Thank you so much for your question! There is no need to change max_iters; just keep it at the same value of 320000. The actual number of iterations will be max_iters / batch size.

Best,

@absqh
Author

absqh commented May 29, 2024

Thank you for your reply. But I'm a bit confused: to reproduce the experiments in your paper, shouldn't the number of epochs be aligned? Or do I only need to align the actual number of iterations in the experiments? If I keep 320000 unchanged, won't both of those change?

@ChenHongruixuan
Owner

Thanks for your question. The number of epochs is aligned precisely by keeping max_iters constant. You can calculate it manually yourself.

For example, in my case the batch size is 16, so the number of iterations will be 320000 / 16 = 20000. In your case, it is 320000 / 8 = 40000. Although the final number of iterations differs, the network is exposed to the same number of samples in both cases, which is the underlying reason the number of epochs stays consistent.
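
As a quick sanity check, the arithmetic can be written out like this (illustrative only, not code from the repository):

```python
# Illustrative arithmetic only: with max_iters fixed, a smaller batch size
# yields more iterations, but the total number of samples seen by the
# network (and hence the number of epochs) is unchanged.
max_iters = 320000

for batch_size in (16, 8):
    iterations = max_iters // batch_size     # parameter updates
    samples_seen = iterations * batch_size   # total samples processed
    print(f"batch_size={batch_size}: {iterations} iterations, {samples_seen} samples")

# batch_size=16: 20000 iterations, 320000 samples
# batch_size=8: 40000 iterations, 320000 samples
```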

Best,

@absqh
Author

absqh commented May 29, 2024

Thanks, I understand now; I hadn't worked this logic out before. Could you point me to where the 320000/16 = 20000 formula is implemented in the code?

@ChenHongruixuan
Owner

Glad to hear that. Please refer to the code in dataset/dataloader:

```python
if max_iters is not None:
    self.data_list = self.data_list * int(np.ceil(float(max_iters) / len(self.data_list)))
    self.data_list = self.data_list[0:max_iters]
```
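
For context, here is a minimal standalone sketch of what this repetition achieves (the file names are hypothetical, not from the SYSU dataset):

```python
# Standalone sketch of the list-repetition trick above; file names are
# hypothetical. The data list is tiled until it covers max_iters entries,
# so a single pass over it yields exactly max_iters samples, independent
# of the original dataset size.
import numpy as np

data_list = ["t1_0001.png", "t1_0002.png", "t1_0003.png"]  # hypothetical
max_iters = 8

data_list = data_list * int(np.ceil(float(max_iters) / len(data_list)))
data_list = data_list[0:max_iters]

print(len(data_list))  # 8 -- the dataset length now equals max_iters
```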

@heikeyuhuajia


Very nice work. However, I am still confused about your statement “Although the final number of iterations is different, the network is exposed to the same amount of samples in both cases”. When max_iters=320000 and batch_size=8, there will be 40000 iterations, and each iteration performs a parameter update, which means 40000 updates. When batch_size=16, the parameters are updated 20000 times. Are these two consistent?

@ChenHongruixuan
Owner

Hello and thank you for pointing this out! You are correct. From the point of view of sample information, they are the same. However, there may be some differences once the adaptive optimizer is taken into account. That said, according to our experiments, the final accuracy will not differ much.
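
To make the distinction concrete, here is a toy sketch (assuming PyTorch is available; this is not code from the repository) that counts optimizer updates under the two batch sizes for the same sample budget:

```python
# Toy sketch, not code from this repository. With the same sample budget,
# batch_size=8 triggers twice as many Adam updates as batch_size=16,
# which is where small optimizer-dependent differences can arise.
import torch

x = torch.randn(320, 4)   # stand-in for a fixed training-sample budget
y = torch.randn(320, 1)

for batch_size in (16, 8):
    torch.manual_seed(0)
    model = torch.nn.Linear(4, 1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    updates = 0
    for i in range(0, len(x), batch_size):
        loss = torch.nn.functional.mse_loss(model(x[i:i + batch_size]),
                                            y[i:i + batch_size])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        updates += 1
    print(f"batch_size={batch_size}: {updates} updates over {len(x)} samples")

# batch_size=16: 20 updates over 320 samples
# batch_size=8: 40 updates over 320 samples
```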

@heikeyuhuajia

Thank you for your reply; it gives me a new understanding. Once again, congratulations on your work, which I will continue to follow and explore.
