TFLite: Error in post-process calculation #7451
Comments
Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.
Thank you for your attention. I have added some additional environment information and my tflite file. Please check whether that is enough.
We would probably need more information about your model. Was it trained with quantization? Or is it a floating point model?
I trained with quantization. However, I have tested both the quantization-aware-trained model and the floating-point model, and the post-processing results are wrong in both cases. I used the official API to run detection with the model, and I wonder what the problem is.
Same problem.
Sorry about the late reply :-). Can you remove the
System information
Model task: models-master/research/object_detection
Model name: mobilenetV2_ssdlite_fakequantization
OS: CentOS 7
== check python ===================================================
python version: 3.6.6
python compiler version: GCC 4.8.5 20150623 (Red Hat 4.8.5-36)
python implementation: CPython
== check os platform ===============================================
os: Linux
os kernel version: #1 SMP Fri Apr 20 16:44:24 UTC 2018
os release version: 3.10.0-862.el7.x86_64
os platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-centos-7.5.1804-Core
linux distribution: ('CentOS Linux', '7.5.1804', 'Core')
linux os distribution: ('centos', '7.5.1804', 'Core')
uname: uname_result(system='Linux', node='VM_12_5_centos', release='3.10.0-862.el7.x86_64', version='#1 SMP Fri Apr 20 16:44:24 UTC 2018', machine='x86_64', processor='x86_64')
architecture: ('64bit', 'ELF')
machine: x86_64
== are we in docker =============================================
No
== compiler =====================================================
c++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
== check pips ===================================================
numpy 1.14.5
protobuf 3.7.1
tensorflow-gpu 1.10.0
== check for virtualenv =========================================
False
== tensorflow import ============================================
tf.version.VERSION = 1.10.0
tf.version.COMPILER_VERSION = 4.8.5
Sanity check: array([1], dtype=int32)
== cuda libs ===================================================
/usr/local/python3/lib/python3.6/site-packages/torch/lib/libcudart-f7fdd8d7.so.9.0
/usr/local/cuda-9.0/doc/man/man7/libcudart.so.7
/usr/local/cuda-9.0/doc/man/man7/libcudart.7
/usr/local/cuda-9.0/lib64/libcudart_static.a
/usr/local/cuda-9.0/lib64/libcudart.so.9.0.103
CUDA Version 9.0.103
cudnn version:7.0.5
== tensorflow installed from info ==================
== python version ==============================================
(major, minor, micro, releaselevel, serial)
(3, 6, 6, 'final', 0)
== bazel version ===============================================
Build label: 0.28.1
Build time: Fri Jul 19 00:00:00 2019 (1563494400)
Build timestamp: 1563494400
Build timestamp as int: 1563494400
Describe the problem
First of all, the frozen_inference_graph.pb file I generated on the server side produces correct inference results and correct post-processing:
![test_1](https://user-images.githubusercontent.com/26913654/63074635-a9520080-bf60-11e9-929a-dc78cabbd1a5.jpg)
![8d139d5b8d8f48c3d01e77b7150b1c9](https://user-images.githubusercontent.com/26913654/63074754-4dd44280-bf61-11e9-820c-ff552df7bee8.png)
![image](https://user-images.githubusercontent.com/26913654/63075002-8e808b80-bf62-11e9-89ba-8769f7862c89.png)
result:
However, when I generated the tflite file, I found wrong results. After analyzing the concat and concat_1 layers, i.e., the tensors just before post-processing, I found that the values computed in tflite were correct, but after post-processing the values became abnormal.
The conversion script is as follows:

```shell
toco --graph_def_file=./tflite_graph_300_300.pb \
  --output_file=./detect_300_300_quantize.tflite \
  --input_shapes=1,300,300,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --inference_type=QUANTIZED_UINT8 \
  --inference_input_type=QUANTIZED_UINT8 \
  --output_format=TFLITE \
  --dump_graphviz_dir=./ \
  --default_ranges_min=0 \
  --default_ranges_max=6 \
  --mean_values=128 \
  --std_dev_values=127
```
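As a side note (not from the original post): the `--mean_values=128 --std_dev_values=127` flags above imply the standard uint8-to-real mapping `real = (quantized - mean) / std`. A minimal numpy sketch of that de-quantization, useful for checking that input preprocessing matches the converter flags:

```python
import numpy as np

# Matches --mean_values=128 --std_dev_values=127 in the toco command above
MEAN, STD = 128.0, 127.0

def dequantize(q):
    """Map uint8 input values back to the real range the model sees."""
    return (np.asarray(q, dtype=np.float32) - MEAN) / STD

# 128 maps to 0.0, 255 maps to 1.0, 0 maps to about -1.008
print(dequantize([0, 128, 255]))
```

If the input image is fed as raw uint8 without this mapping being reflected in the converter flags, quantized inference results will be skewed.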
Meanwhile, I also tried converting with different versions of TF. Since I use CUDA 9.0, I first tested versions before 1.13.1; even with --allow_custom_ops, I still got "Op type not registered 'TFLite_Detection_PostProcess'" on 1.10.0. I therefore upgraded to 1.11.0, 1.12.0 and 1.14.0. With those, the conversion completed without errors, but after debugging I found that the post-processing results were all still wrong. The official demo model, however, does run correctly.
This is a problem with both quantized and floating-point inference. I suspect that with "--allow_custom_ops" the conversion to tflite succeeds, but there is still a bug causing incorrect calculations. Does anyone else have this problem? Your comments are welcome.
The tflite test file / test code:
test_file.zip
This is my test code:
```python
import cv2
import numpy as np
import tensorflow as tf

# model_path and image_path are set elsewhere
interpreter = tf.contrib.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Load the image and convert BGR (OpenCV default) to RGB
image_np = cv2.imread(image_path)
b, g, r = np.split(image_np, 3, axis=2)
image_np = np.concatenate((r, g, b), axis=2)
image_np_org = image_np

# Resize to the model input size, add the batch dimension, and match the
# input dtype (uint8 for a quantized model)
image_np = cv2.resize(image_np, (400, 300))
image_np = np.reshape(image_np, (1, 300, 400, 3)).astype(input_details[0]['dtype'])

interpreter.set_tensor(input_details[0]['index'], image_np)
# Run inference
interpreter.invoke()

# TFLite_Detection_PostProcess emits boxes, classes, scores, num_detections
output_dict = {}
output_dict['detection_boxes'] = interpreter.get_tensor(output_details[0]['index'])
output_dict['detection_scores'] = interpreter.get_tensor(output_details[2]['index'])
```
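Not in the original post, but for completeness: the TFLite_Detection_PostProcess op's box output is normalized [ymin, xmin, ymax, xmax] per detection. A hedged sketch, using dummy arrays in place of real interpreter outputs, of how those values would be filtered by score and scaled back to pixel coordinates:

```python
import numpy as np

# Dummy stand-ins for interpreter.get_tensor(...) results, shaped like the
# post-process outputs for one image with at most 3 detections
boxes = np.array([[[0.1, 0.2, 0.5, 0.6],   # [ymin, xmin, ymax, xmax], normalized
                   [0.0, 0.0, 1.0, 1.0],
                   [0.3, 0.3, 0.4, 0.4]]])
scores = np.array([[0.9, 0.2, 0.75]])
img_h, img_w = 300, 400  # original image size used in the test code above

# Keep detections above a score threshold and convert to pixel coordinates
threshold = 0.5
keep = scores[0] > threshold
pixel_boxes = boxes[0][keep] * np.array([img_h, img_w, img_h, img_w])
for ymin, xmin, ymax, xmax in pixel_boxes:
    print(int(xmin), int(ymin), int(xmax), int(ymax))
```

With correct post-processing, the printed boxes should line up with the detections drawn on the frozen-graph results above; wildly out-of-range values here would confirm the abnormal post-process outputs being reported.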