
[Enhance] Support evaluate on both EMA and non-EMA models. #1204

Merged 2 commits into open-mmlab:dev-1.x on Dec 5, 2022

Conversation

mzr1996
Member

@mzr1996 mzr1996 commented Nov 16, 2022

Motivation

Sometimes we want to get the metrics of both the EMA and the original model during validation and testing.

Modification

Add an EMAHook to MMCLS that inherits from the EMAHook in MMEngine.

BC-breaking (Optional)

Almost none; the only breaking change concerns after_load_checkpoint. In the original logic, both the source model and the EMA model loaded the EMA parameters when resume=False, so the source parameters could not be accessed during testing.

In this PR, this logic is moved to before_train, which means runner.load_or_resume always loads the EMA parameters into the EMA model and the source parameters into the source model. Only when runner.train is called with resume=False are the EMA parameters copied into the source model.
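The load/resume behaviour above can be sketched with plain dicts standing in for state dicts; the function and key names here are illustrative, not the real MMEngine/MMCLS API:

```python
# Self-contained sketch of the load/resume logic: checkpoints carry both
# parameter sets, and the EMA weights are only copied into the source
# model when a fresh training run starts (resume=False).

def load_checkpoint(checkpoint):
    """Keep the two parameter sets separate at load time, so both the
    source and the EMA model can be evaluated later."""
    source = dict(checkpoint['state_dict'])
    ema = dict(checkpoint['ema_state_dict'])
    return source, ema

def before_train(source, ema, resume):
    """Only on a fresh training run (resume=False) are the EMA weights
    copied into the source model."""
    if not resume:
        source = dict(ema)
    return source, ema

ckpt = {'state_dict': {'w': 1.0}, 'ema_state_dict': {'w': 0.5}}
src, ema = load_checkpoint(ckpt)
print(src['w'], ema['w'])   # both parameter sets stay accessible

src, ema = before_train(src, ema, resume=False)
print(src['w'])             # fresh run: EMA weights copied into source
```

With the old logic, the first step would already have overwritten the source parameters with the EMA ones.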

Use cases (Optional)

# By default, validate/test only the EMA models
custom_hooks = [
    dict(
        type='EMAHook',
        momentum=0.001,
        priority='ABOVE_NORMAL')
]
# Validate/test both EMA and original models
custom_hooks = [
    dict(
        type='EMAHook',
        momentum=0.001,
        priority='ABOVE_NORMAL',
        evaluate_on_origin=True)
]
# Validate/test only the original models
custom_hooks = [
    dict(
        type='EMAHook',
        momentum=0.001,
        priority='ABOVE_NORMAL',
        evaluate_on_ema=False,
        evaluate_on_origin=True)
]
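For reference, the momentum in these configs weights the newest source parameters at each EMA step; a minimal sketch with plain floats (assuming the update rule ema = (1 - momentum) * ema + momentum * src, not the actual tensor implementation):

```python
# Toy sketch of the per-iteration EMA update and the parameter swap a
# hook can use to also evaluate the original model. Plain floats stand
# in for tensors.

def ema_step(ema, src, momentum=0.001):
    """One update: ema <- (1 - momentum) * ema + momentum * src."""
    return {k: (1 - momentum) * ema[k] + momentum * src[k] for k in ema}

src = {'w': 1.0}
ema = {'w': 0.0}
for _ in range(3):
    ema = ema_step(ema, src)

# With a small momentum the EMA trails the source parameters:
print(ema['w'])   # equals 1 - (1 - 0.001)**3

# To evaluate the original model as well, a hook can swap the two
# parameter sets around the validation loop and swap them back:
src, ema = ema, src   # evaluate with the original weights...
src, ema = ema, src   # ...then restore the EMA weights
```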

Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix the potential lint issues.
  • Bug fixes are fully covered by unit tests, the case that causes the bug should be added in the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
  • The documentation has been modified accordingly, like docstring or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
  • CLA has been signed and all committers have signed the CLA in this PR.

@mzr1996 mzr1996 requested a review from Ezra-Yu November 16, 2022 03:00
@codecov

codecov bot commented Nov 16, 2022

Codecov Report

Base: 0.02% // Head: 89.02% // Increases project coverage by +88.99% 🎉

Coverage data is based on head (2e3e61f) compared to base (b8b31e9).
Patch has no changes to coverable lines.

Additional details and impacted files
@@             Coverage Diff              @@
##           dev-1.x    #1204       +/-   ##
============================================
+ Coverage     0.02%   89.02%   +88.99%     
============================================
  Files          121      148       +27     
  Lines         8217    11277     +3060     
  Branches      1368     1794      +426     
============================================
+ Hits             2    10039    +10037     
+ Misses        8215      980     -7235     
- Partials         0      258      +258     
Flag Coverage Δ
unittests 89.02% <ø> (+88.99%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmcls/apis/inference.py 0.00% <0.00%> (ø)
mmcls/datasets/transforms/compose.py
mmcls/models/backbones/deit3.py 94.52% <0.00%> (ø)
mmcls/evaluation/metrics/voc_multi_label.py 100.00% <0.00%> (ø)
mmcls/models/utils/layer_scale.py 86.66% <0.00%> (ø)
mmcls/models/backbones/swin_transformer_v2.py 89.63% <0.00%> (ø)
mmcls/models/backbones/mvit.py 92.46% <0.00%> (ø)
mmcls/models/retrievers/image2image.py 92.38% <0.00%> (ø)
mmcls/models/backbones/replknet.py 93.00% <0.00%> (ø)
mmcls/models/classifiers/hugging_face.py 25.33% <0.00%> (ø)
... and 138 more



Collaborator

@Ezra-Yu Ezra-Yu left a comment


LGTM.

@mzr1996 mzr1996 merged commit 7b9a101 into open-mmlab:dev-1.x Dec 5, 2022