fix bugs in KLDivergence #35

Merged 5 commits into open-mmlab:master on Dec 31, 2021

Conversation

@hunto (Contributor) commented on Dec 29, 2021

Motivation

  1. The computation of the KLDivergence loss is incorrect (see the definition in the PyTorch docs: https://pytorch.org/docs/master/generated/torch.nn.KLDivLoss.html#torch.nn.KLDivLoss).
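
For reference, PyTorch defines the pointwise KLDivLoss as l(x, y) = y * (log(y) - x), where the input x must already be log-probabilities and the target y plain probabilities; passing raw logits or softmax outputs as the input therefore yields an incorrect loss value.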

Modification

  1. Use the API provided by PyTorch (torch.nn.functional.kl_div) to compute the KL divergence; a sketch of the resulting module is shown after this list.
  2. Add a reduction parameter. By default, the reduction method is batchmean.
  3. Use F.log_softmax instead of instantiating a new torch.nn.LogSoftmax module in the forward function.
  4. Update the docs accordingly.
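
A minimal sketch of a loss module reflecting these changes, assuming class logits of shape (N, C); the tau (temperature) and loss_weight arguments are illustrative and may not match mmrazor's exact signature:

```python
import torch.nn as nn
import torch.nn.functional as F


class KLDivergence(nn.Module):
    """KL divergence distillation loss between student and teacher logits."""

    def __init__(self, tau=1.0, reduction='batchmean', loss_weight=1.0):
        super().__init__()
        self.tau = tau  # softmax temperature (illustrative)
        self.reduction = reduction
        self.loss_weight = loss_weight

    def forward(self, preds_S, preds_T):
        # F.kl_div expects the input in log-space and the target as plain
        # probabilities, so the student logits pass through log_softmax
        # and the (detached) teacher logits through softmax.
        softmax_pred_T = F.softmax(preds_T.detach() / self.tau, dim=1)
        logsoftmax_pred_S = F.log_softmax(preds_S / self.tau, dim=1)
        loss = (self.tau**2) * F.kl_div(
            logsoftmax_pred_S, softmax_pred_T, reduction=self.reduction)
        return self.loss_weight * loss
```

With reduction='batchmean' the summed loss is divided by the batch size, which matches the mathematical definition of KL divergence; PyTorch's default 'mean' divides by the total number of elements instead.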

BC-breaking (Optional)

No BC-breaking.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that caused the bug should be added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, e.g. docstrings or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
  • CLA has been signed and all committers have signed the CLA in this PR.

codecov bot commented on Dec 30, 2021

Codecov Report

Merging #35 (f58d3ca) into master (57a5549) will decrease coverage by 0.05%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master      #35      +/-   ##
==========================================
- Coverage   55.59%   55.54%   -0.06%     
==========================================
  Files          81       81              
  Lines        2941     2942       +1     
  Branches      544      544              
==========================================
- Hits         1635     1634       -1     
- Misses       1231     1232       +1     
- Partials       75       76       +1     
Flag        Coverage Δ
unittests   55.54% <100.00%> (-0.06%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files                                Coverage Δ
mmrazor/models/losses/kl_divergence.py        100.00% <100.00%> (ø)
mmrazor/models/mutators/one_shot_mutator.py   92.59% <0.00%> (-3.71%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 57a5549...f58d3ca.

pppppM merged commit 4611be2 into open-mmlab:master on Dec 31, 2021
pppppM pushed a commit to pppppM/mmrazor that referenced this pull request Jul 15, 2022
* fix bugs in KLDivergence

* Merge branch "open-mmlab-master" into "fix_kldiv"

* fix linting errors

* fix yapf error

Co-authored-by: huangtao <huangtao@senseauto.com>
humu789 pushed a commit to humu789/mmrazor that referenced this pull request Feb 13, 2023
* fix custom ops support, fix multiple mark bug, add name mapping

* check if the value_info need to be added

* remove unnecessary print

* add nms implement

* two stage split wip

* add two stage split

* add split retinanet visualize

* add two stage split (wip)

* finish two stage split

* fix lint

* move parse string to mmdeploy.utils

* add func mark count dict

* use assert_cfg_valid

* update func count before add Mark

* fix dynamic shape support

* add calib data generator

* create calib dataset

* finish end2end int8

* add split two stage tensorrt visualize
humu789 pushed a commit to humu789/mmrazor that referenced this pull request Feb 13, 2023
* add gpu ci

* fix ci

* set mmcv==1.4.0 in ci config

* fix ci

* import nms in forward

* update

* change cuda ci

* change to cuda10.2

* change to torch1.9.0

* fix

* add cuda11.1

* add empty line