How to use Yolov5 tflite? #1090
Hello @ngotra2710, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook, Docker Image, and Google Cloud Quickstart Guide for example environments. If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue; otherwise we cannot help you. If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting for our clients, from simple expert advice up to delivery of fully customized, end-to-end production solutions.

For more information please visit https://www.ultralytics.com.
@ngotra2710 #959 may be of use to you.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@ngotra2710 when you convert to .onnx, you lose the operations that apply the sigmoid and rectify the values according to the anchors. Try implementing those steps by following yolo.py -> class Detect -> forward(). On another note, could you share how you converted .onnx to .pb? I tried using https://github.com/onnx/onnx-tensorflow to do this, but I got this error:
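The sigmoid-and-anchor post-processing mentioned above can be sketched in NumPy roughly as follows. This is a minimal sketch of what yolo.py's Detect.forward() does for one output layer; the anchor values, stride, and tensor layout below are illustrative assumptions, not the values of any particular exported model:

```python
import numpy as np

def decode_yolov5(raw, anchors, stride):
    """Sketch of Detect.forward() post-processing for one output layer.

    raw:     (1, na, ny, nx, no) array of raw network outputs (logits)
    anchors: (na, 2) anchor sizes in pixels for this layer
    stride:  stride of this layer in pixels (e.g. 8, 16, or 32)
    """
    na, ny, nx = raw.shape[1], raw.shape[2], raw.shape[3]
    # 1. sigmoid over every channel (xy, wh, objectness, class scores)
    y = 1.0 / (1.0 + np.exp(-raw))
    # 2. build the per-cell offset grid, shape (1, 1, ny, nx, 2)
    xv, yv = np.meshgrid(np.arange(nx), np.arange(ny))
    grid = np.stack((xv, yv), axis=-1).reshape(1, 1, ny, nx, 2).astype(np.float32)
    # 3. rectify xy: cell-relative offsets -> pixel coordinates
    y[..., 0:2] = (y[..., 0:2] * 2.0 - 0.5 + grid) * stride
    # 4. rectify wh: scale by the anchors for this layer
    anchor_grid = anchors.reshape(1, na, 1, 1, 2)
    y[..., 2:4] = (y[..., 2:4] * 2.0) ** 2 * anchor_grid
    return y
```

Without these steps the .onnx/.tflite output stays in raw logit space, which is why the values look nothing like pixel boxes.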
❔Question
How should I set up the input data before feeding it into the tflite interpreter so that I get the correct output?
Additional context
Currently, I am trying to convert a PyTorch model file to a tflite file, and the conversion has already succeeded.
The tflite file and the input/output details are in the attached zip file.
best_v2.zip
This is my code when I try to feed the input into the model:

```python
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
if input_details[0]['dtype'] == np.float32:
    floating_model = True
height = input_details[0]['shape'][2]
width = input_details[0]['shape'][3]
frame_resized = cv2.resize(frame_rgb, (width, height))
input_data = frame_resized
if floating_model:
    # input_data = (np.float32(input_data) - 127.5) / 127.5
    input_data = np.float32(input_data) / 255.0
input_data = np.reshape(np.expand_dims(input_data, 0), input_details[0]['shape'])
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
reshape_output_data = np.reshape(output_data,
                                 (output_data.shape[0],
                                  output_data.shape[1] * output_data.shape[2] * output_data.shape[3],
                                  output_data.shape[4]))
```
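One detail worth double-checking in the snippet above: if `input_details[0]['shape']` is `[1, 3, height, width]` (channels-first, as PyTorch exports typically are), then `np.reshape` does not reorder the axes of an HWC image; it only reinterprets memory, which scrambles the pixel layout. A minimal sketch of the transpose that would be needed instead (the helper name here is mine, not from the original code):

```python
import numpy as np

def to_nchw(frame_hwc):
    """Convert an (H, W, 3) uint8 RGB frame to a (1, 3, H, W) float32
    tensor scaled to 0.0-1.0, matching a channels-first model input."""
    x = frame_hwc.astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))   # HWC -> CHW: reorders axes, not just memory
    return np.expand_dims(x, 0)      # add batch dimension -> NCHW
```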
When I normalize the input from 0–255 to 0.0–1.0, I get this output:

```
[[[ 2.1864   0.77372  1.0798   0.87739 -11.606   3.5788 ]
  [ 1.0578   0.83459  1.5088   0.64846 -10.49    3.8019 ]
  [ 0.39111  0.40996  2.2016   0.33661 -12.224   3.9996 ]
  ...
  [ 0.11459 -1.668    0.43695 -0.27363 -12.781   4.5359 ]
  [-0.88718 -1.2919  -0.11859 -0.20273 -14.401   4.745  ]
  [-0.87244 -0.45423 -0.1573  -0.03495 -16.105   4.8643 ]]]
```
What I want is (the same as the result from detect.py):

```
[tensor([[ 65.62970, 234.14124, 154.66338, 319.21924,   0.95002,   0.00000],
         [425.51935, 203.04468, 528.26880, 267.44305,   0.93659,   0.00000],
         [326.86328,  99.11230, 455.14697, 209.28619,   0.91681,   0.00000]])]
```
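For reference, each detect.py row above is [x1, y1, x2, y2, confidence, class]. Getting from decoded [cx, cy, w, h, ...] rows to that form involves a center-to-corner box conversion followed by non-max suppression; the corner conversion alone can be sketched as follows (a minimal sketch, with a helper name of my choosing):

```python
import numpy as np

def xywh2xyxy(boxes):
    """Convert boxes from [cx, cy, w, h] (center format) to
    [x1, y1, x2, y2] (corner format)."""
    out = boxes.copy()
    out[..., 0] = boxes[..., 0] - boxes[..., 2] / 2.0  # x1 = cx - w/2
    out[..., 1] = boxes[..., 1] - boxes[..., 3] / 2.0  # y1 = cy - h/2
    out[..., 2] = boxes[..., 0] + boxes[..., 2] / 2.0  # x2 = cx + w/2
    out[..., 3] = boxes[..., 1] + boxes[..., 3] / 2.0  # y2 = cy + h/2
    return out
```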
Could someone who knows how to use the tflite model explain it to me?
I suspect my tflite file was not converted correctly, but I cannot tell, because I just followed the command-line steps (pt -> onnx -> pb -> tflite).