Partially freeze SWIN backbone #8208
Comments
@BIGWangYuDong any updates on this?
any updates? @BIGWangYuDong
I get the same error when using I believe this can be fixed by adding
@austinmw I got the same error. Your code fixed it. Thank you very much!
liuchang0523 added a commit to liuchang0523/mmyolo that referenced this issue on Dec 7, 2023:
Solve the problem: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss. Reference: open-mmlab/mmdetection#8208 (comment)
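For MMDetection/MMYOLO configs, the fix described in that commit message amounts to adding one line to the training config so that DDP is constructed with unused-parameter detection enabled. A minimal sketch (the filename is illustrative, not from this issue):

```python
# In your training config (e.g. configs/my_swin_frozen.py):
# With part of the backbone frozen, some parameters never receive
# gradients, which trips DDP's reduction check. This flag tells the
# runner to build DistributedDataParallel with
# find_unused_parameters=True so those parameters are skipped.
find_unused_parameters = True
```

Note that `find_unused_parameters=True` adds an extra traversal of the autograd graph each iteration, so it carries a small performance cost and is best enabled only when parts of the model are intentionally unused.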
Hi, can I check how we can partially freeze a SWIN backbone? I've tried adding
frozen_stages=3
to the backbone in the config, but was met with the following error:

Is there something else that needs to be set? Thank you!
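For context, setting `frozen_stages` on the Swin backbone in an MMDetection config typically looks like the fragment below. This is a hedged sketch: only `type` and `frozen_stages` are taken from this issue, and the remaining backbone keys are omitted rather than guessed.

```python
# Fragment of an MMDetection model config: freeze the patch embedding
# and the first 3 stages of the Swin backbone so their parameters are
# excluded from training (requires_grad is set to False for them).
model = dict(
    backbone=dict(
        type='SwinTransformer',
        frozen_stages=3,
        # ... remaining SwinTransformer arguments from the base config
    ),
)
```

As the thread above notes, freezing stages under distributed training can then require `find_unused_parameters = True` in the config, since the frozen parameters no longer contribute to the loss.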