Add warning about future computation change for ConvTranspose with auto_pad #11984

Merged · 8 commits · Jun 29, 2022
onnxruntime/core/providers/cpu/nn/conv_transpose.h (10 changes: 9 additions & 1 deletion)
@@ -25,7 +25,15 @@ namespace onnxruntime {
 template <typename T>
 class ConvTranspose : public OpKernel {
  public:
-  ConvTranspose(const OpKernelInfo& info) : OpKernel(info), conv_transpose_attrs_(info) {}
+  ConvTranspose(const OpKernelInfo& info) : OpKernel(info), conv_transpose_attrs_(info) {
+    if (conv_transpose_attrs_.auto_pad == AutoPadType::SAME_UPPER ||
+        conv_transpose_attrs_.auto_pad == AutoPadType::SAME_LOWER) {
+      // TODO(jcwchen): #9740 ORT 1.13 will correct the logic by switching them to meet ONNX spec
+      LOGS_DEFAULT(WARNING) << "The existing bug in the padding distribution for auto_pad type"
+                            << " SAME_UPPER/SAME_LOWER will be fixed in the next ORT 1.13 release, and hence the"
+                            << " results of the ConvTranspose operator using the above auto_pad type(s) will be different.";
+    }
+  }
 
   Status PrePack(const Tensor& tensor, int input_idx, AllocatorPtr alloc,
                  /*out*/ bool& is_packed,
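For context, here is a minimal sketch (not ORT code) of how the ONNX spec distributes the total padding for one spatial axis of ConvTranspose when auto_pad is SAME_UPPER or SAME_LOWER, i.e. the behavior ORT 1.13 will switch to per #9740. The standalone helper and its name DistributePadding are illustrative assumptions, not part of this PR.

#include <cstdint>
#include <utility>

// Total padding for one spatial axis of ConvTranspose (per the ONNX spec):
//   total_padding = stride * (input_size - 1) + output_padding
//                   + ((kernel_size - 1) * dilation + 1) - output_size
// The spec then splits it between the two ends of the axis as below; the TODO
// in the diff above notes that ORT's current split is the opposite of this,
// which is the bug the new warning refers to.
std::pair<int64_t, int64_t> DistributePadding(int64_t total_padding, bool same_upper) {
  const int64_t smaller_half = total_padding / 2;
  const int64_t larger_half = total_padding - smaller_half;
  // SAME_UPPER: the extra padding (larger half) goes at the end of the axis;
  // SAME_LOWER: it goes at the beginning.
  return same_upper ? std::make_pair(smaller_half, larger_half)
                    : std::make_pair(larger_half, smaller_half);
}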