QDQ removal around Resize (mode=linear) causes wrong numeric values #21319

Open
mgehre-amd opened this issue Jul 11, 2024 · 0 comments

Describe the issue

The QDQ optimization in onnxruntime removes the Q-DQ nodes around a Resize and thus makes the Resize compute in int8.
This might be fine for nearest-neighbor interpolation, but in linear interpolation mode it changes the numeric output values.
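For this repro's data (input `[0, 1]`, scale factor 2 on the last axis, the default `half_pixel` coordinate transformation, quantization scale 1.0 and zero point 0), a few lines of numpy show the difference. The truncating integer path below is an assumption chosen because it reproduces the reported onnxruntime output, not a claim about what the actual int8 kernel does:

```python
import numpy as np

# Float Resize(mode="linear") of [0, 1] to length 4, half_pixel mapping.
x = np.array([0.0, 1.0])
coords = np.clip((np.arange(4) + 0.5) / 2.0 - 0.5, 0, len(x) - 1)
lo = np.floor(coords).astype(int)
hi = np.minimum(lo + 1, len(x) - 1)
frac = coords - lo
float_out = x[lo] * (1 - frac) + x[hi] * frac   # [0.0, 0.25, 0.75, 1.0]

# Quantizing the float result afterwards rounds to nearest:
print(np.rint(float_out))    # [0. 0. 1. 1.]  -- matches onnx.reference
# Interpolating on the int8 values and truncating toward zero (assumed)
# collapses 0.75 to 0:
print(np.trunc(float_out))   # [0. 0. 0. 1.]  -- matches onnxruntime
```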

The mismatch does not happen with `sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL`.
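For reference, the workaround from above, plus a narrower alternative that is an assumption: the `session.disable_quant_qdq` session config key is documented among onnxruntime's session options config keys, but I have not verified it against this repro:

```python
import onnxruntime

# Workaround used above: disable all graph optimizations.
opts_all = onnxruntime.SessionOptions()
opts_all.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL

# Possibly narrower (assumption: this config key disables only the QDQ
# transformations, leaving other optimizations enabled).
opts_qdq = onnxruntime.SessionOptions()
opts_qdq.add_session_config_entry("session.disable_quant_qdq", "1")
```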

To reproduce

```python
#!/usr/bin/env python3
import onnx
import onnx.parser      # for onnx.parser.parse_model below
import onnx.reference
import numpy
from numpy import array, float32

model = """\
<
  ir_version: 8,
  opset_import: ["" : 19]
>
main_graph (float[4] Concat_output_0, float[1,1,1,2] Abs_output_0) 
=> (float[unk__0,unk__1,unk__2,unk__3] DequantizeLinear_2_output_0) 
{
  Cast_2_output_0 = Constant <value: tensor = int8 {0}> ()
  Cast_3_output_0 = Constant <value: tensor = float {1.0}> ()
  QuantizeLinear_1_output_0 = QuantizeLinear (Abs_output_0, Cast_3_output_0, Cast_2_output_0)
  DequantizeLinear_1_output_0 = DequantizeLinear (QuantizeLinear_1_output_0, Cast_3_output_0, Cast_2_output_0)
  Resize_output_0 = Resize <mode: string = "linear"> (DequantizeLinear_1_output_0, , Concat_output_0)
  Cast_4_output_0 = Constant <value: tensor = int8 {0}> ()
  Cast_5_output_0 = Constant <value: tensor = float {1.0}> ()
  QuantizeLinear_2_output_0 = QuantizeLinear (Resize_output_0, Cast_5_output_0, Cast_4_output_0)
  DequantizeLinear_2_output_0 = DequantizeLinear (QuantizeLinear_2_output_0, Cast_5_output_0, Cast_4_output_0)
}

"""

m = onnx.parser.parse_model(model)

inputs = {'Concat_output_0': array([1., 1., 1., 2.], dtype=float32),
          'Abs_output_0': array([[[[0.0, 1.0]]]], dtype=float32)}

out = onnx.reference.ReferenceEvaluator(m).run(None, inputs)[0]
print("onnx.reference", out)

import onnxruntime
sess_options = onnxruntime.SessionOptions()
#sess_options.log_severity_level = 0
# Uncommenting the next line fixes the numeric mismatch
# sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL
sess_options.optimized_model_filepath = "opt.onnx"
ort_out = onnxruntime.InferenceSession(m.SerializeToString(), sess_options).run(None, inputs)[0]

print("onnxruntime", ort_out)

numpy.testing.assert_allclose(out, ort_out, rtol=1e-3, atol=1e-3)
```

Running the script shows a numeric mismatch:

```
onnx.reference [[[[0. 0. 1. 1.]]]]
onnxruntime [[[[0. 0. 0. 1.]]]]
AssertionError:
Not equal to tolerance rtol=0.001, atol=0.001
```

If you look at opt.onnx, you see that the model has been transformed into

[image: the optimized graph, with Resize running directly on the int8 tensor between QuantizeLinear and DequantizeLinear]

which is invalid because the linear interpolation cannot be done correctly in int8.
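To confirm this without a graph viewer, the optimized model the script wrote out can be dumped node by node (a minimal sketch; it assumes only that opt.onnx was produced by the repro above):

```python
import onnx

# Load the optimized model written via sess_options.optimized_model_filepath
# and print each node, to verify that the Q/DQ pair around Resize is gone
# and Resize now consumes the int8 tensor directly.
m = onnx.load("opt.onnx")
for node in m.graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))
```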

Urgency

No response

Platform

Linux

OS Version

Ubuntu 22.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.18.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

@yufenglee added the quantization label (issues related to quantization) on Jul 11, 2024