
Fix cast removal bug #17953

Merged: 6 commits into microsoft:main on Oct 31, 2023

Conversation

@adityagoel4512 (Contributor) commented Oct 15, 2023

Description

The `RemoveDuplicateCastTransformer` fairly naively removed Cast nodes from the graph without considering precision loss when both types belong to the same `TypeGroup`. For instance, F64 -> F32 -> F64 would be optimised out of the graph.
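
A minimal numpy sketch (illustrative only, not ORT code) of the precision loss such a removal hides:

```python
import numpy as np

# 1 + 2**-40 is representable in float64 (53-bit mantissa) but not in
# float32 (24-bit mantissa), so the F64 -> F32 -> F64 round trip
# silently drops the low-order bits.
x = np.array([1.0 + 2.0**-40], dtype=np.float64)
roundtrip = x.astype(np.float32).astype(np.float64)

print(x == roundtrip)  # [False]
print(x - roundtrip)   # [~9.1e-13]: the error hidden by removing the casts
```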

I also noticed that signedness was not accounted for; this is not covered by any existing issue but is a problem. For example, int -> unsigned int -> int produces very different values for negative inputs and so should not be optimised out.
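
Again as an illustrative numpy sketch (assuming ordinary wraparound casts, not the transformer's actual code), routing a negative value through an unsigned intermediate changes what a later widening cast observes:

```python
import numpy as np

x = np.array([-1], dtype=np.int32)

# Through the unsigned intermediate, -1 is reinterpreted as 4294967295,
# and the widening cast then preserves that large value ...
via_unsigned = x.astype(np.uint32).astype(np.int64)  # [4294967295]

# ... whereas the "optimised" direct cast sign-extends instead.
direct = x.astype(np.int64)                          # [-1]

print(via_unsigned, direct)
```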

One could argue that we shouldn't be performing such cast elimination at all (at least not in this transformer). The original scope might well be restricted to eliminating only the unnecessary casts inserted by the `InsertCastTransformer` and no others.

Motivation and Context

This should fix #17565, #9915 and #8787.

@tianleiwu (Contributor)

/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline

@tianleiwu (Contributor)

/azp run Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows ARM64 QNN CI Pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed, ONNX Runtime React Native CI Pipeline, Windows x64 QNN CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 9 pipeline(s).

@azure-pipelines

Azure Pipelines successfully started running 9 pipeline(s).

@adityagoel4512 (Contributor, Author)

@tianleiwu could you retrigger CI please?

@yuslepukhin (Member)

/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline

@yuslepukhin (Member)

/azp run Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows ARM64 QNN CI Pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed, ONNX Runtime React Native CI Pipeline, Windows x64 QNN CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 9 pipeline(s).

@azure-pipelines

Azure Pipelines successfully started running 9 pipeline(s).

@adityagoel4512 (Contributor, Author)

@yuslepukhin the Windows_CI / Onnxruntime-TVM CI is also failing on the main branch. Is this PR ready to be merged?

@tianleiwu (Contributor)

This change might have an impact on some models. The choice between a conservative and an aggressive Cast remover might depend on the precision requirements of an application. For example, double -> float -> double might not matter much for some models.

Shall we add some configuration so that users can choose between conservative and aggressive Cast removal?
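
As a sketch of what such a knob could look like via the existing session-config mechanism: the key name below is purely hypothetical (not an actual ORT option); only `add_session_config_entry` itself is real API:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Hypothetical config key for choosing between conservative and
# aggressive cast removal; illustrative only, not implemented in ORT.
so.add_session_config_entry("optimization.cast_removal_mode", "conservative")
```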

@adityagoel4512 (Contributor, Author) commented Oct 19, 2023

I would characterise what was happening as a bug. The point of `RemoveDuplicateCastTransformer` was really to remove extra casts inserted by `InsertCastTransformer`, not to act as an optimisation pass (see

// if we had multiple nodes in a row that were converted to fp32 we will have casts around every node.
// Casts in between converted nodes cancel each other out and can be removed.
// e.g.
// -> NodeA(fp16) -> NodeB(fp16) ->
// After converting both to fp32
// -> CastToFp32 -> NodeA(fp32) -> CastToFp16 -> CastToFp32 -> NodeB(fp32) -> CastToFp16
// After running duplicate cast removal
// -> CastToFp32 -> NodeA(fp32) -> NodeB(fp32) -> CastToFp16
//
). It runs even when optimisations are disabled, so in my opinion it should only perform removals that are safe.
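
For reference, a minimal sketch of the public Python API involved (the model path is a placeholder): even with every graph optimisation disabled, the insert/remove cast passes still run at session creation, which is how the unsafe removal in #17565 surfaced.

```python
import onnxruntime as ort

# Disable all graph optimisations; the InsertCastTransformer /
# RemoveDuplicateCastTransformer pair still runs during initialisation.
so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL

sess = ort.InferenceSession("model.onnx", sess_options=so)  # placeholder path
```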

If it is desirable for some models perhaps a future PR could implement an explicit cast optimisation pass?

@tianleiwu (Contributor)

> If it is desirable for some models perhaps a future PR could implement an explicit cast optimisation pass?

Sounds good to me.

@adityagoel4512 (Contributor, Author)

@tianleiwu @yuslepukhin unless you have any concerns, could we merge this?

@tianleiwu (Contributor)

@adityagoel4512, please merge microsoft:main; otherwise a required pipeline cannot pass.

@tianleiwu (Contributor)

/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline

@tianleiwu (Contributor)

/azp run Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows ARM64 QNN CI Pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed, ONNX Runtime React Native CI Pipeline, Windows x64 QNN CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 9 pipeline(s).

@azure-pipelines

Azure Pipelines successfully started running 9 pipeline(s).

@tianleiwu (Contributor)

LGTM

@tianleiwu (Contributor)

/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline

@tianleiwu (Contributor)

/azp run Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows ARM64 QNN CI Pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed, ONNX Runtime React Native CI Pipeline, Windows x64 QNN CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 9 pipeline(s).

@azure-pipelines

Azure Pipelines successfully started running 9 pipeline(s).

@tianleiwu merged commit ed41a28 into microsoft:main on Oct 31, 2023
62 of 65 checks passed
@adityagoel4512 deleted the fix-casting-optimisation branch on January 18, 2024 at 16:48
renovate bot added a commit to ziyadedher/ziyadedher that referenced this pull request Feb 26, 2024
[![Mend Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Change | Age | Adoption | Passing | Confidence |
|---|---|---|---|---|---|
| [onnxruntime-web](https://togithub.com/Microsoft/onnxruntime) | [`1.16.3` -> `1.17.1`](https://renovatebot.com/diffs/npm/onnxruntime-web/1.16.3/1.17.1) | [![age](https://developer.mend.io/api/mc/badges/age/npm/onnxruntime-web/1.17.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/onnxruntime-web/1.17.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/onnxruntime-web/1.16.3/1.17.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/onnxruntime-web/1.16.3/1.17.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) |

---

### Release Notes

<details>
<summary>Microsoft/onnxruntime (onnxruntime-web)</summary>

### [`v1.17.0`](https://togithub.com/microsoft/onnxruntime/releases/tag/v1.17.0): ONNX Runtime v1.17.0

[Compare Source](https://togithub.com/Microsoft/onnxruntime/compare/v1.16.3...v1.17.0)

### Announcements

In the next release, we will drop support for Windows ARM32 entirely.

### General

- Added support for new ONNX 1.15 opsets:
[IsInf-20](https://togithub.com/onnx/onnx/blob/main/docs/Changelog.md#IsInf-20),
[IsNaN-20](https://togithub.com/onnx/onnx/blob/main/docs/Changelog.md#IsNaN-20),
[DFT-20](https://togithub.com/onnx/onnx/blob/main/docs/Changelog.md#DFT-20),
[ReduceMax-20](https://togithub.com/onnx/onnx/blob/main/docs/Changelog.md#ReduceMax-20),
[ReduceMin-20](https://togithub.com/onnx/onnx/blob/main/docs/Changelog.md#reducemin-20),
[AffineGrid-20](https://togithub.com/onnx/onnx/blob/main/docs/Changelog.md#AffineGrid-20),
[GridSample](https://togithub.com/onnx/onnx/blob/main/docs/Operators.md#GridSample),
[ConstantOfShape-20](https://togithub.com/onnx/onnx/blob/main/docs/Changelog.md#ConstantOfShape-20),
[RegexFullMatch](https://togithub.com/onnx/onnx/blob/main/docs/Operators.md#RegexFullMatch),
[StringConcat](https://togithub.com/onnx/onnx/blob/main/docs/Operators.md#StringConcat),
[StringSplit](https://togithub.com/onnx/onnx/blob/main/docs/Operators.md#StringSplit),
and
[ai.onnx.ml.LabelEncoder-4](https://togithub.com/onnx/onnx/blob/main/docs/Changelog-ml.md#ai.onnx.ml.LabelEncoder-4).
- Updated C/C++ libraries: abseil, date, nsync, googletest, wil, mp11,
cpuinfo, safeint, and onnx.

### Build System and Packages

- Dropped CentOS 7 support. All Linux binaries now require glibc version >= 2.28, but users can still build from source for a lower glibc version.
-   Added CUDA 12 packages for Python and Nuget.
- Added Python 3.12 packages for ONNX Runtime Inference. ONNX Runtime Training Python 3.12 packages cannot be provided at this time, since training packages depend on PyTorch, which does not yet support Python 3.12.
- Linux binaries (except those in AMD GPU packages) are built in a more
secure way that is compliant with BinSkim's default policy (e.g., the
binaries no longer have an executable stack).
- Added support for Windows ARM64X for users who build ONNX Runtime from
source. No prebuilt package provided yet.
- Removed Windows ARM32 binaries from official packages. Users who still
need these binaries can build them from source.
-   Added AMD GPU package with ROCm and MiGraphX (Python + Linux only).
-   Split ONNX Runtime GPU Nuget package into two packages.
- When building the source code for Linux ARM64 or Android, the C/C++
compiler must support BFloat16. Support for Android NDK 24.x has been
removed. Please use NDK 25.x or 26.x instead.
- Link time code generation (LTCG or LTO) is now disabled by default
when building from source. To re-enable it, users can add "--enable_lto"
to the build command. All prebuilt binaries are still built with LTO.

### Core

-   Optimized graph inlining.
-   Allow custom op to invoke internal thread-pool for parallelism.
-   Added support for supplying a custom logger at the session level.
- Added new logging and tracing of session and execution provider
options.
- Added new dynamic ETW provider that can trace/diagnose ONNX internals
while maintaining great performance.

### Performance

-   Added 4bit quant support on NVIDIA GPU and ARM64.

### EPs

#### TensorRT EP

- Added support for direct load of precompiled TensorRT engines and
customizable engine prefix.
-   Added Python support for TensorRT plugins via ORT custom ops.
-   Fixed concurrent Session::Run bugs.
- Updated calls to deprecated TensorRT APIs (e.g., enqueue_v2 →
enqueue_v3).
-   Fixed various memory leak bugs.

#### QNN EP

-   Added support for QNN SDK 2.18.
-   Added context binary caching and model initialization optimizations.
-   Added mixed precision (8/16 bit) quantization support.
- Added device-level session options (soc_model, htp_arch, device_id), extreme_power_saver for htp_performance_mode, and vtcm_mb settings.
-   Fixed multi-threaded inference bug.
-   Fixed various other bugs and added performance improvements.
- QNN profiling of the NPU can be enabled dynamically with ETW or written out to CSV.

#### OpenVINO EP

-   Added support for OpenVINO 2023.2.
- Added AppendExecutionProvider_OpenVINO_V2 API for supporting new
OpenVINO EP options.

#### DirectML EP

- Updated to [DirectML
1.13.1](https://togithub.com/microsoft/DirectML/blob/master/Releases.md).
-   Updated operators LpPool-18 and AveragePool-19 with dilations.
-   Improved Python I/O binding support.
-   Added RotaryEmbedding.
-   Added support for fusing subgraphs into DirectML execution plans.
- Added new Python API to choose a specific GPU on multi-GPU devices
with the DirectML EP.

### Mobile

-   Added initial support for 4bit quantization on ARM64.
-   Extended CoreML/NNAPI operator coverage.
-   Added support for YOLOv8 pose detection pre/post processing.
-   Added support for macOS in CocoaPods package.

### Web

-   Added support for external data format.
-   Added support for I/O bindings.
-   Added support for training.
-   Added WebGPU optimizations.
-   Transitioned WebGPU out of experimental.
-   Added FP16 support for WebGPU.

### Training

#### Large Model Training

-   Enabled support for QLoRA (with support for BFloat16).
- Added symbolic shape support for Triton codegen (see
[PR](https://togithub.com/microsoft/onnxruntime/pull/18317)).
- Made improvements to recompute optimizer with easy ON/OFF to allow
layer-wise recompute (see
[PR](https://togithub.com/microsoft/onnxruntime/pull/18566)).
- Enabled memory-efficient gradient management. For Mistral, we see
~10GB drop in memory consumption when this feature is ON (see
[PR](https://togithub.com/microsoft/onnxruntime/pull/18924)).
-   Enabled embedding sparsity optimizations.
- Added support for Aten efficient attention and Triton Flash Attention
(see [PR](https://togithub.com/microsoft/onnxruntime/pull/17959)).
-   Packages now available for CUDA 11.8 and 12.1.

#### On Device Training

- On-device training now supports training on the web. This release focuses on federated learning and developer exploration scenarios, with more features coming in future releases.

### Extensions

- Modified gen_processing_model tokenizer model to output int64,
unifying output datatype of all tokenizers.
- Implemented support for post-processing of YOLO v8 within the Python
extensions package.
- Introduced 'fairseq' flag to enhance compatibility with certain
Hugging Face tokenizers.
- Incorporated 'added_token' attribute into the BPE tokenizer to improve
CodeGen tokenizer functionality.
- Enhanced the SentencePiece tokenizer by integrating token indices into
the output.
- Added support for the custom operator implemented with CUDA kernels,
including two example operators.
- Added more tests on the Hugging Face tokenizer and fixed identified
bugs.

### Known Issues

- The onnxruntime-training package is not yet available in PyPI but can be accessed in ADO as follows:

        python -m pip install cerberus flatbuffers h5py numpy>=1.16.6 onnx packaging protobuf sympy setuptools>=41.4.0
        pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT/pypi/simple/ onnxruntime-training
        pip install torch-ort
        python -m torch_ort.configure

    Installation instructions can also be accessed [here](https://onnxruntime.ai/getting-started).
-   For models with an int4 kernel only:
    - A crash may occur when int4 is applied on Intel CPUs with hybrid cores if the E-cores are disabled in BIOS. A fix is in progress and will be patched.
    - A performance regression in the int4 kernel on x64 makes the op following MatMulNBits much slower. A fix is in progress and will be patched.
- A current bug in the BeamSearch implementation of T5, GPT, and Whisper may break models under heavy inference load using BeamSearch on CUDA. See [#&#8203;19345](https://togithub.com/microsoft/onnxruntime/pull/19345). A fix is in progress and will be patched.
- Full support of ONNX 1.15 opsets is still in progress. The new ONNX 1.15 opset support included in this release is listed above in the 'General' section.
- Some Cast nodes will not be removed (see [microsoft/onnxruntime#17953](https://togithub.com/microsoft/onnxruntime/pull/17953)): a Cast node from higher precision to lower precision (like fp32 to fp16) will be kept. If a model's results differ between ORT 1.16 and 1.17, check whether some Cast nodes were removed in 1.16 but kept in 1.17.

### Contributions

Contributors to ONNX Runtime include members across teams at Microsoft,
along with our community members:
[snnn](https://togithub.com/snnn),
[fs-eire](https://togithub.com/fs-eire),
[tianleiwu](https://togithub.com/tianleiwu),
[mszhanyi](https://togithub.com/mszhanyi),
[edgchen1](https://togithub.com/edgchen1),
[skottmckay](https://togithub.com/skottmckay),
[jchen351](https://togithub.com/jchen351),
[adrianlizarraga](https://togithub.com/adrianlizarraga),
[qjia7](https://togithub.com/qjia7),
[Honry](https://togithub.com/Honry),
[HectorSVC](https://togithub.com/HectorSVC),
[chilo-ms](https://togithub.com/chilo-ms),
[axinging](https://togithub.com/axinging),
[jeffbloo](https://togithub.com/jeffbloo),
[pengwa](https://togithub.com/pengwa),
[yuslepukhin](https://togithub.com/yuslepukhin),
[guschmue](https://togithub.com/guschmue),
[satyajandhyala](https://togithub.com/satyajandhyala),
[xadupre](https://togithub.com/xadupre),
[RandyShuai](https://togithub.com/RandyShuai),
[PeixuanZuo](https://togithub.com/PeixuanZuo),
[RandySheriffH](https://togithub.com/RandySheriffH),
[er3x3](https://togithub.com/er3x3),
[wschin](https://togithub.com/wschin),
[yf711](https://togithub.com/yf711),
[PatriceVignola](https://togithub.com/PatriceVignola),
[askhade](https://togithub.com/askhade),
[smk2007](https://togithub.com/smk2007),
[natke](https://togithub.com/natke),
[kunal-vaishnavi](https://togithub.com/kunal-vaishnavi),
[YUNQIUGUO](https://togithub.com/YUNQIUGUO),
[liqunfu](https://togithub.com/liqunfu),
[cloudhan](https://togithub.com/cloudhan),
[wangyems](https://togithub.com/wangyems),
[yufenglee](https://togithub.com/yufenglee),
[ajindal1](https://togithub.com/ajindal1),
[baijumeswani](https://togithub.com/baijumeswani)
[justinchuby](https://togithub.com/justinchuby),
[Craigacp](https://togithub.com/Craigacp),
[wejoncy](https://togithub.com/wejoncy),
[jywu-msft](https://togithub.com/jywu-msft),
[hariharans29](https://togithub.com/hariharans29),
[nums11](https://togithub.com/nums11),
[jslhcl](https://togithub.com/jslhcl),
[jeffdaily](https://togithub.com/jeffdaily),
[chenfucn](https://togithub.com/chenfucn),
[zhijxu-MS](https://togithub.com/zhijxu-MS),
[mindest](https://togithub.com/mindest),
[BowenBao](https://togithub.com/BowenBao),
[sumitsays](https://togithub.com/sumitsays),
[prasanthpul](https://togithub.com/prasanthpul),
[fdwr](https://togithub.com/fdwr),
[pranavsharma](https://togithub.com/pranavsharma),
[chentaMS](https://togithub.com/chentaMS),
[zhangxiang1993](https://togithub.com/zhangxiang1993),
[souptc](https://togithub.com/souptc),
[zhanghuanrong](https://togithub.com/zhanghuanrong),
[faxu](https://togithub.com/faxu),
[georgen117](https://togithub.com/georgen117),
[sfatimar](https://togithub.com/sfatimar),
[thiagocrepaldi](https://togithub.com/thiagocrepaldi),
[adityagoel4512](https://togithub.com/adityagoel4512),
[ivberg](https://togithub.com/ivberg),
[sophies927](https://togithub.com/sophies927)

**NOTE: Please let us know via [this GitHub
issue](https://togithub.com/microsoft/onnxruntime/issues/19444) if you
contributed to this release but your name is missing from this list, and
we will add you manually!**

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Mend
Renovate](https://www.mend.io/free-developer-tools/renovate/). View
repository job log
[here](https://developer.mend.io/github/ziyadedher/ziyadedher).


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
kleiti pushed a commit to kleiti/onnxruntime that referenced this pull request Mar 22, 2024
Successfully merging this pull request may close these issues:

Graph optimization removes significant casts even if all optimizations are disabled
3 participants