fix bugs in KLDivergence #35
Merged
Conversation
pppppM reviewed on Dec 29, 2021
Codecov Report
@@ Coverage Diff @@
## master #35 +/- ##
==========================================
- Coverage 55.59% 55.54% -0.06%
==========================================
Files 81 81
Lines 2941 2942 +1
Branches 544 544
==========================================
- Hits 1635 1634 -1
- Misses 1231 1232 +1
- Partials 75 76 +1
pppppM pushed a commit to pppppM/mmrazor that referenced this pull request on Jul 15, 2022:
* fix bugs in KLDivergence
* Merge branch "open-mmlab-master" into "fix_kldiv"
* fix linting errors
* fix yapf error
Co-authored-by: huangtao <huangtao@senseauto.com>
humu789 pushed a commit to humu789/mmrazor that referenced this pull request on Feb 13, 2023:
* fix custom ops support, fix multiple mark bug, add name mapping
* check if the value_info need to be added
* remove unnecessary print
* add nms implement
* two stage split wip
* add two stage split
* add split retinanet visualize
* add two stage split (wip)
* finish two stage split
* fix lint
* move parse string to mmdeploy.utils
* add func mark count dict
* use assert_cfg_valid
* update func count before add Mark
* fix dynamic shape support
* add calib data generator
* create calib dataset
* finish end2end int8
* add split two stage tensorrt visualize
humu789 pushed a commit to humu789/mmrazor that referenced this pull request on Feb 13, 2023:
* add gpu ci
* fix ci
* set mmcv==1.4.0 in ci config
* fix ci
* import nms in forward
* udpate
* change cuda ci
* change to cuda10.2
* change to torch1.9.0
* fix
* add cuda11.1
* add empty line
Motivation

Modification

Use torch.nn.functional.kl_div to compute the KLDiv, and support the reduction parameter. By default, the reduction method is batchmean. Use F.log_softmax instead of initializing a new torch.nn.LogSoftmax module in the forward function.

BC-breaking (Optional)

No BC-breaking.
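The modification described above can be sketched as follows. This is a minimal illustration of the F.kl_div + F.log_softmax pattern the PR describes, not the repository's exact implementation; the function name, signature, and temperature parameter tau are hypothetical.

```python
import torch
import torch.nn.functional as F


def kl_divergence_loss(preds_S, preds_T, tau=1.0, reduction='batchmean'):
    # Hypothetical sketch of a KL-divergence distillation loss.
    # Softened teacher probabilities serve as the target distribution.
    softmax_pred_T = F.softmax(preds_T / tau, dim=1)
    # F.log_softmax is applied functionally, so no torch.nn.LogSoftmax
    # module needs to be constructed in the forward function.
    logsoftmax_pred_S = F.log_softmax(preds_S / tau, dim=1)
    # F.kl_div expects log-probabilities as input and probabilities as
    # target; 'batchmean' divides the summed loss by the batch size.
    return (tau ** 2) * F.kl_div(logsoftmax_pred_S, softmax_pred_T,
                                 reduction=reduction)
```

Passing reduction through to F.kl_div lets callers choose 'none', 'mean', 'sum', or 'batchmean' without changing the forward logic.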
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here and update the documentation.
Checklist
Before PR:
After PR: