
Split operator: inconsistent results between ONNX and MNN #2337

Closed
ghost opened this issue Apr 15, 2023 · 4 comments
Labels
bug Something isn't working

Comments

@ghost

ghost commented Apr 15, 2023

Problem description

While converting a YOLOv8 model from ONNX to MNN, the consistency check performed with MNN's testMNNFromOnnx tool reported mismatching results.
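For reference, the conversion step that the test script drives internally can also be run standalone with the MNNConvert binary built below; a minimal sketch (the bizCode value is arbitrary):

./MNNConvert -f ONNX --modelFile test.onnx --MNNModel test.mnn --bizCode biz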

Build version:

tag 2.4.2

Build steps:

mkdir build
cd build
cmake .. -DMNN_BUILD_CONVERTER=true && make -j4

Run:

python ../tools/script/testMNNFromOnnx.py test.onnx

log:
onnx/test.onnx
tensor(float)
['output0']
inputs:
images
onnx/
outputs:
onnx/output0.txt (1, 2520, 6)
onnx/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:29:50] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:29:50] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:29:50] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ output0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: output0
output0: (1, 2520, 6, )
TESTERROR output0 value error : absMaxV:637.293518 - DiffMax 153.856781
Error for output output0
Save mnn result to .error director

Run (DEBUG mode re-runs the check on intermediate nodes to bisect the graph and localize the first mismatching operator):

python ../tools/script/testMNNFromOnnx.py test.onnx DEBUG

log:
Debug Mode: True
onnx/test.onnx
tensor(float)
['/model.3/conv/Conv_output_0']
inputs:
images
onnx/
outputs:
onnx//model.3/conv/Conv_output_0.txt (1, 64, 24, 80)
onnx//model.3/conv/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:30:58] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:30:58] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:30:58] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.3/conv/Conv_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.3/conv/Conv_output_0
/model.3/conv/Conv_output_0: (1, 64, 24, 80, )
TESTERROR /model.3/conv/Conv_output_0 value error : absMaxV:11.974254 - DiffMax 7.639077
Error for output /model.3/conv/Conv_output_0
Save mnn result to .error director

Test Node : /model.3/conv/Conv False
onnx/test.onnx
tensor(float)
['/model.2/cv1/act/Mul_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/cv1/act/Mul_output_0.txt (1, 32, 48, 160)
onnx//model.2/cv1/act/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:01] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:01] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:01] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/cv1/act/Mul_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/cv1/act/Mul_output_0
/model.2/cv1/act/Mul_output_0: (1, 32, 48, 160, )
TEST_SUCCESS

Test Node : /model.2/cv1/act/Mul True
onnx/test.onnx
tensor(float)
['/model.2/Concat_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/Concat_output_0.txt (1, 48, 48, 160)
onnx//model.2/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:05] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:05] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:05] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/Concat_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/Concat_output_0
/model.2/Concat_output_0: (1, 48, 48, 160, )
TESTERROR /model.2/Concat_output_0 value error : absMaxV:27.603397 - DiffMax 25.806040
Error for output /model.2/Concat_output_0
Save mnn result to .error director

Test Node : /model.2/Concat False
onnx/test.onnx
tensor(float)
['/model.2/Split_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/Split_output_0.txt (1, 16, 48, 160)
onnx//model.2/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:08] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:08] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:08] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/Split_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/Split_output_0
/model.2/Split_output_0: (1, 16, 48, 160, )
TEST_SUCCESS

Test Node : /model.2/Split True
Error is between /model.2/Split and /model.2/Concat
onnx/test.onnx
tensor(float)
['/model.2/Split_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/Split_output_0.txt (1, 16, 48, 160)
onnx//model.2/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:11] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:11] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:11] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/Split_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/Split_output_0
/model.2/Split_output_0: (1, 16, 48, 160, )
TEST_SUCCESS

Test Node : /model.2/Split True
onnx/test.onnx
tensor(float)
['/model.2/Split_output_1']
inputs:
images
onnx/
outputs:
onnx//model.2/Split_output_1.txt (1, 16, 48, 160)
onnx//model.2/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:13] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:13] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:13] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/Split_output_1, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/Split_output_1
/model.2/Split_output_1: (1, 16, 48, 160, )
TEST_SUCCESS

Test Node : /model.2/Split True
onnx/test.onnx
tensor(float)
['/model.2/m.0/Add_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/m.0/Add_output_0.txt (1, 16, 48, 160)
onnx//model.2/m.0/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:16] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:16] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:16] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/m.0/Add_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/m.0/Add_output_0
/model.2/m.0/Add_output_0: (1, 16, 48, 160, )
TESTERROR /model.2/m.0/Add_output_0 value error : absMaxV:25.796280 - DiffMax 25.911701
Error for output /model.2/m.0/Add_output_0
Save mnn result to .error director

Test Node : /model.2/m.0/Add False
onnx/test.onnx
tensor(float)
['/model.2/Split_output_1']
inputs:
images
onnx/
outputs:
onnx//model.2/Split_output_1.txt (1, 16, 48, 160)
onnx//model.2/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:19] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:19] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:19] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/Split_output_1, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/Split_output_1
/model.2/Split_output_1: (1, 16, 48, 160, )
TEST_SUCCESS

Test Node : /model.2/Split True
onnx/test.onnx
tensor(float)
['/model.2/m.0/cv2/act/Mul_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/m.0/cv2/act/Mul_output_0.txt (1, 16, 48, 160)
onnx//model.2/m.0/cv2/act/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:22] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:22] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:22] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/m.0/cv2/act/Mul_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/m.0/cv2/act/Mul_output_0
/model.2/m.0/cv2/act/Mul_output_0: (1, 16, 48, 160, )
TESTERROR /model.2/m.0/cv2/act/Mul_output_0 value error : absMaxV:21.914549 - DiffMax 25.643703
Error for output /model.2/m.0/cv2/act/Mul_output_0
Save mnn result to .error director

Test Node : /model.2/m.0/cv2/act/Mul False
onnx/test.onnx
tensor(float)
['/model.2/m.0/cv1/act/Mul_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/m.0/cv1/act/Mul_output_0.txt (1, 16, 48, 160)
onnx//model.2/m.0/cv1/act/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:25] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:25] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:25] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/m.0/cv1/act/Mul_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/m.0/cv1/act/Mul_output_0
/model.2/m.0/cv1/act/Mul_output_0: (1, 16, 48, 160, )
TESTERROR /model.2/m.0/cv1/act/Mul_output_0 value error : absMaxV:19.549574 - DiffMax 19.519550
Error for output /model.2/m.0/cv1/act/Mul_output_0
Save mnn result to .error director

Test Node : /model.2/m.0/cv1/act/Mul False
onnx/test.onnx
tensor(float)
['/model.2/m.0/cv1/conv/Conv_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/m.0/cv1/conv/Conv_output_0.txt (1, 16, 48, 160)
onnx//model.2/m.0/cv1/conv/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:28] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:28] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:28] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/m.0/cv1/conv/Conv_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/m.0/cv1/conv/Conv_output_0
/model.2/m.0/cv1/conv/Conv_output_0: (1, 16, 48, 160, )
TESTERROR /model.2/m.0/cv1/conv/Conv_output_0 value error : absMaxV:32.350361 - DiffMax 37.784775
Error for output /model.2/m.0/cv1/conv/Conv_output_0
Save mnn result to .error director
Test Node : /model.2/m.0/cv1/conv/Conv False
Error is between /model.2/Split and /model.2/m.0/cv1/conv/Conv
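To inspect a mismatch by hand, the ONNX reference dump can be compared against the tensor MNN saves on failure. A minimal sketch in Python; the onnx/ path is taken from the log above, while the file name inside the .error directory is an assumption:

import numpy as np

# Reference values dumped by the script (path as printed in the log).
ref = np.loadtxt("onnx//model.2/m.0/cv1/conv/Conv_output_0.txt").ravel()
# MNN result saved on failure; this exact file name is an assumption.
mnn = np.loadtxt(".error//model.2/m.0/cv1/conv/Conv_output_0.txt").ravel()

print("absMaxV:", np.abs(ref).max())        # compare with absMaxV in the log
print("DiffMax:", np.abs(ref - mnn).max())  # compare with DiffMax in the log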

test.zip


@jxt1234
Collaborator

jxt1234 commented Apr 15, 2023

The fastblit optimization is buggy, and the softmax operator from different ONNX versions is handled incorrectly; a fix is in progress.

@jxt1234 jxt1234 added the bug Something isn't working label Apr 15, 2023
@jxt1234
Collaborator

jxt1234 commented Apr 18, 2023

Fixed in 2.4.3.

@jxt1234 jxt1234 closed this as completed Apr 18, 2023
@ghost
Author

ghost commented Apr 23, 2023

In my testing I found that only the test files were fixed; during actual inference the outputs still don't match. I have roughly located the cause: after changing my ONNX export from opset 16 to opset 11, the problem went away.
This suggests the ONNX spec has changed between those opsets; could MNN be mishandling some operators during conversion or inference?
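One relevant spec change: starting with opset 13, the ONNX Split operator takes its split sizes as a second input tensor instead of a "split" attribute, which is exactly the kind of operator-level difference a converter can mishandle. A minimal re-export sketch, assuming the model was exported with the Ultralytics API (the checkpoint name is a placeholder):

from ultralytics import YOLO

# Placeholder checkpoint; substitute the weights actually used.
model = YOLO("yolov8n.pt")
# Export with opset 11 instead of 16 to work around the mismatch.
model.export(format="onnx", opset=11)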

@iamjoseph331

> After changing my ONNX export from opset 16 to opset 11, the problem went away.

Huge thanks. I tried many approaches and this was the only one that worked.
