
[Refactor] Update analysis tools and documentations. #1359

Merged 3 commits into open-mmlab:dev-1.x on Feb 15, 2023

Conversation

mzr1996
Member

@mzr1996 mzr1996 commented Feb 9, 2023

Motivation

Some analysis tools are not available, and the documentation is outdated.

Modification

Update these analysis tools and the related documentation.

BC-breaking

The interface of the tools/analysis_tools/eval_metric.py is changed.

In the new eval_metric.py, you no longer need to specify the config file; instead, use the
--metric argument to specify the metrics to calculate. For example:

# Eval the top-1 and top-5 accuracy
python tools/analysis_tools/eval_metric.py results.pkl --metric type=Accuracy topk=1,5

# Eval accuracy, precision, recall and f1-score
python tools/analysis_tools/eval_metric.py results.pkl --metric type=Accuracy \
    --metric type=SingleLabelMetric items=precision,recall,f1-score
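As a rough illustration of how a `key=value` metric specification like the ones above could map to a metric config dict, here is a minimal sketch. The helper name and parsing rules (comma lists become tuples, digit strings become ints) are assumptions for illustration only, not the actual implementation in eval_metric.py:

```python
def parse_metric_args(pairs):
    """Hypothetical sketch: convert ["type=Accuracy", "topk=1,5"]
    into a metric config dict such as {"type": "Accuracy", "topk": (1, 5)}.

    Comma-separated values become tuples; values made of digits are
    converted to ints, everything else stays a string.
    """
    def convert(value):
        return int(value) if value.isdigit() else value

    cfg = {}
    for pair in pairs:
        key, _, value = pair.partition('=')
        items = value.split(',')
        if len(items) > 1:
            cfg[key] = tuple(convert(v) for v in items)
        else:
            cfg[key] = convert(items[0])
    return cfg


# Example: the two --metric specs from the second command above
accuracy_cfg = parse_metric_args(["type=Accuracy"])
single_label_cfg = parse_metric_args(
    ["type=SingleLabelMetric", "items=precision,recall,f1-score"])
```

Under these assumptions, `parse_metric_args(["type=Accuracy", "topk=1,5"])` yields `{'type': 'Accuracy', 'topk': (1, 5)}`, the shape a metric class could be built from.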

The output of mmcls.utils.load_json_log has also changed to the format below, to match the new log format.

{
    'train': [
        {"lr": 0.1, "time": 0.02, "epoch": 1, "step": 100},
        {"lr": 0.1, "time": 0.02, "epoch": 1, "step": 200},
        {"lr": 0.1, "time": 0.02, "epoch": 1, "step": 300},
        ...
    ],
    'val': [
        {"accuracy/top1": 32.1, "step": 1},
        {"accuracy/top1": 50.2, "step": 2},
        {"accuracy/top1": 60.3, "step": 3},
        ...
    ]
}
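A short sketch of how a log in this new shape might be consumed, e.g. to find the best validation checkpoint. The log dict mirrors the format shown above; the helper function is illustrative and not part of mmcls:

```python
def best_val_accuracy(log):
    """Return (step, accuracy) of the best validation top-1 accuracy
    from a load_json_log-style dict as described in this PR."""
    best = max(log['val'], key=lambda rec: rec['accuracy/top1'])
    return best['step'], best['accuracy/top1']


# A tiny log in the new format (same keys as the example above)
log = {
    'train': [
        {'lr': 0.1, 'time': 0.02, 'epoch': 1, 'step': 100},
        {'lr': 0.1, 'time': 0.02, 'epoch': 1, 'step': 200},
    ],
    'val': [
        {'accuracy/top1': 32.1, 'step': 1},
        {'accuracy/top1': 50.2, 'step': 2},
        {'accuracy/top1': 60.3, 'step': 3},
    ],
}

step, acc = best_val_accuracy(log)  # step 3, accuracy 60.3
```

Since each `val` record is a flat dict keyed by metric name, the same pattern works for any other logged metric by swapping the `'accuracy/top1'` key.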

Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix the potential lint issues.
  • Bug fixes are fully covered by unit tests, the case that causes the bug should be added in the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
  • The documentation has been modified accordingly, like docstring or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
  • CLA has been signed and all committers have signed the CLA in this PR.

@codecov

codecov bot commented Feb 9, 2023

Codecov Report

Base: 0.02% // Head: 86.87% // Increases project coverage by +86.85% 🎉

Coverage data is based on head (725810c) compared to base (b8b31e9).
Patch has no changes to coverable lines.

❗ Current head 725810c differs from pull request most recent head 4bffea1. Consider uploading reports for the commit 4bffea1 to get more accurate results

Additional details and impacted files
@@             Coverage Diff              @@
##           dev-1.x    #1359       +/-   ##
============================================
+ Coverage     0.02%   86.87%   +86.85%     
============================================
  Files          121      168       +47     
  Lines         8217    13685     +5468     
  Branches      1368     2181      +813     
============================================
+ Hits             2    11889    +11887     
+ Misses        8215     1428     -6787     
- Partials         0      368      +368     
Flag Coverage Δ
unittests 86.87% <ø> (+86.85%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmcls/datasets/transforms/compose.py
mmcls/models/backbones/efficientformer.py 95.08% <0.00%> (ø)
mmcls/models/backbones/vig.py 18.85% <0.00%> (ø)
mmcls/models/backbones/beit.py 57.06% <0.00%> (ø)
mmcls/engine/hooks/margin_head_hooks.py 92.59% <0.00%> (ø)
mmcls/utils/analyze.py 100.00% <0.00%> (ø)
mmcls/models/utils/norm.py 80.00% <0.00%> (ø)
mmcls/apis/model.py 87.09% <0.00%> (ø)
mmcls/models/tta/score_tta.py 100.00% <0.00%> (ø)
mmcls/models/classifiers/timm.py 25.97% <0.00%> (ø)
... and 159 more


☔ View full report at Codecov.

@Ezra-Yu Ezra-Yu self-requested a review February 15, 2023 02:04
Collaborator

@Ezra-Yu Ezra-Yu left a comment


LGTM.

@mzr1996 mzr1996 merged commit bedf4e9 into open-mmlab:dev-1.x Feb 15, 2023
@lb-hit
Copy link

lb-hit commented Mar 8, 2023

[screenshot: three numbers printed in the console output]

Hi, I got these three numbers by running the code. My dataset has 15 classes; what do these three numbers mean?
