RoiAlign CPU EP add warning for max mode with samples != 1 #12136
Merged
Conversation
…tensor, which can be randomly generated from onnxruntime_perf_test or even malicious data
hariharans29 approved these changes on Jul 11, 2022

The "Linux GPU CI Pipeline" is showing red on this page due to an infrastructure issue ("Artifact drop-linux already exists for build 687812"), but an explicit new run of the pipeline on the PR branch is green: https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=688606&view=results. For some reason GitHub's results are not showing the updated status 🤷♂️.
RandySheriffH pushed a commit that referenced this pull request on Jul 18, 2022:
* RoiAlign add warning about incorrect max summation when sample size not 1
RandySheriffH added a commit that referenced this pull request on Jul 19, 2022:
* support optimizer opt for deepspeed 0.5.9
* resolve comments
* resolve comments
* FP16_Optimizer Support for more Deepspeed Versions (#12046)
  * fp16_optimizer for more ds versions
  * change ds version
  * bugfix
  * fix bug
* Fix unused function warning for decodeMIDR(). (#12069)
  Changed from static function defined in header to function declared in header and defined in separate .cc file.
* pin protobuf version to be compatible with onnx (#12132)
* RoiAlign CPU EP add warning for max mode with samples != 1 (#12136)
  * RoiAlign add warning about incorrect max summation when sample size not 1
* include coreml_provider_factory.h in macos build instead of coreml_ex… (#12138)
  include coreml_provider_factory.h in macos build instead of coreml_execution_provider.h
* List 3.10 as supported python version and remove 3.6 (#12141)
* Use updated symbolic_helper.check_training_mode (#11900)
  Co-authored-by: Jingyan Wang, Baiju Meswani
* Fix GH issue 12151 by using inverse perms for updating DQ axis attribute (#12158)
  Need to use inverse perms for updating the axis to what is used for transposing the input. This only applies if the DQ node is doing per-axis dequantization.
* fixing positions for beam search gpt2 (#12156)
  Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
* remove wrong placed libs (#12201)
* Add file mapping for windows platform. (#12183)
  * Add unit test for file mapping for windows. Also add an error message for mis-aligned offset
  * Update data type to avoid warnings
  * Compitable data type to avoid warnings.
  * Update CreatFileMapping2 condition for winml compiling.
  * Add type conversion to avoid warnings for X86 release build.
* Fix bug where onnxruntime_USE_NCCL flag would default to ON (#12195)
  Fix bug where onnxruntime_USE_NCCL flag would default to ON, causing ORT to not build properly. New functionality: flag is ON when training is enabled and NCCL is not disabled. Flag is OFF otherwise.

Co-authored-by: zhijxu <zhijxu@microsoft.com>
Co-authored-by: zhijxu <zhijxu>
Co-authored-by: Vincent Wang <wangwchpku@outlook.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com@orttrainingdev10.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Dwayne Robinson <dwayner@microsoft.com>
Co-authored-by: Carson Swope <carsonswope@users.noreply.github.com>
Co-authored-by: Randy Shuai <rashuai@microsoft.com>
Co-authored-by: jingyanwangms <47403504+jingyanwangms@users.noreply.github.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: Viswanath Boga <44417868+viboga@users.noreply.github.com>
Co-authored-by: leqiao-1 <61653207+leqiao-1@users.noreply.github.com>
Co-authored-by: caoting-dotcom <71617901+caoting-dotcom@users.noreply.github.com>
Co-authored-by: Ting Cao <ticao@microsoft.com>
Co-authored-by: Sean Murray <59740888+seanmurr1@users.noreply.github.com>
Description:
The CPU EP implementation of RoiAlign is incorrect per ONNX for mode == "max" when the sample count is > 1: either when sampling_ratio is explicitly > 1, or when sampling_ratio == 0 and the dynamically computed sample count is > 1. See #6146. Emit a warning for compatibility in ORT 1.12; the behavior will be fixed in ORT 1.13 via #7354.

Motivation and Context
A warning should be emitted so people have some time to update their models, such as mask_rcnn_R_50_FPN_1x.onnx, which shows small differences in model output. This change mimics Jacky's "Add warning about future computation change for ConvTranspose with auto_pad" (#11984). See also "Dubious implementation of ROIAlign in 'max' mode" (#6146).
The actual fix is PR #7354, which introduces small differences in the output.
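To make the role of the sample count concrete, here is a minimal NumPy sketch (my own illustration, not ORT's kernel) of the spec-conformant per-bin reduction: the ONNX spec says mode == "max" takes the max over the bilinearly interpolated sample values inside each output bin. With sampling_ratio == 1 there is a single sample per bin, so any reduction over the samples trivially agrees; with more samples the choice of reduction matters, which is exactly the case the warning flags.

```python
import numpy as np


def bilinear(feat, y, x):
    """Bilinearly interpolate a 2-D feature map at continuous (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, feat.shape[0] - 1)
    x1 = min(x0 + 1, feat.shape[1] - 1)
    ly, lx = y - y0, x - x0
    return (feat[y0, x0] * (1 - ly) * (1 - lx)
            + feat[y0, x1] * (1 - ly) * lx
            + feat[y1, x0] * ly * (1 - lx)
            + feat[y1, x1] * ly * lx)


def max_pool_bin(feat, y_start, x_start, bin_h, bin_w, samples):
    """Spec-conformant mode="max" for one output bin: max over a
    samples x samples grid of interpolated points inside the bin."""
    vals = []
    for iy in range(samples):
        for ix in range(samples):
            y = y_start + (iy + 0.5) * bin_h / samples
            x = x_start + (ix + 0.5) * bin_w / samples
            vals.append(bilinear(feat, y, x))
    return max(vals)


feat = np.arange(16, dtype=np.float32).reshape(4, 4)
print(max_pool_bin(feat, 0.0, 0.0, 2.0, 2.0, samples=1))  # 5.0, one sample
print(max_pool_bin(feat, 0.0, 0.0, 2.0, 2.0, samples=2))  # 7.5, four samples
```

With a single sample the bin value is the interpolation at the bin center; with four samples the max is taken over four distinct interpolated values, so a reduction that deviates from "max over interpolated samples" can only diverge once samples > 1.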