This repository has been archived by the owner on Nov 11, 2023. It is now read-only.

Yolov5-Lite with shuffle block convert to tflite #82

Closed
ppogg opened this issue Nov 19, 2021 · 17 comments
Labels
YOLOv5 Read the README.

Comments

@ppogg

ppogg commented Nov 19, 2021

Unbelievable, this repository is really great!!!

@PINTO0309
Owner

I will not respond unless you provide ONNX or OpenVINO IR. I really dislike sloppy issues.

#77 "Yolov5 with Shuffle_Block model conversion error, ValueError: Dimension size must be evenly divisible by 3 but is 16"

@ppogg ppogg closed this as completed Nov 20, 2021
@ppogg ppogg reopened this Nov 20, 2021
@ppogg
Author

ppogg commented Nov 20, 2021

Sorry sir, like this:
v5lite-s.xml: https://drive.google.com/file/d/1POkWJaGG8qyebf1PCVWp_fMTmFJ4jl1C/view?usp=sharing
v5lite-s.bin: https://drive.google.com/file/d/1Mq8cFc2rTX1KwFeaSc3bUlVyga1bWl5H/view?usp=sharing
The dataset is COCO.

@PINTO0309
Owner

PINTO0309 commented Nov 20, 2021

There seems to be no post-processing at all; are you sure this is the complete model? This is not a problem for the conversion itself, but I felt that including post-processing would be better if the model is to be used end-to-end.
[Screenshot 2021-11-20 12:11:19: model graph showing no post-processing]
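For readers unfamiliar with what "post-processing" means here: YOLOv5-style heads are decoded with a sigmoid, grid offsets, and anchor scaling before NMS. A rough sketch of that decode follows; the stride and anchor values are illustrative, not taken from v5lite-s:

import numpy as np

def decode_head(raw: np.ndarray, stride: int, anchors: np.ndarray) -> np.ndarray:
    """raw: (1, na, ny, nx, no) raw head output -> (1, na*ny*nx, no)."""
    _, na, ny, nx, no = raw.shape
    yv, xv = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    grid = np.stack((xv, yv), axis=-1).reshape(1, 1, ny, nx, 2)
    y = 1.0 / (1.0 + np.exp(-raw))                           # sigmoid over all outputs
    y[..., 0:2] = (y[..., 0:2] * 2.0 - 0.5 + grid) * stride  # xy in pixels
    y[..., 2:4] = (y[..., 2:4] * 2.0) ** 2 * anchors.reshape(1, na, 1, 1, 2)  # wh
    return y.reshape(1, -1, no)

# Illustrative call: one 80x80 head at stride 8 with made-up anchors.
raw = np.random.randn(1, 3, 80, 80, 85).astype(np.float32)
anchors = np.array([[10, 13], [16, 30], [33, 23]], dtype=np.float32)
print(decode_head(raw, stride=8, anchors=anchors).shape)  # (1, 19200, 85)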

@PINTO0309
Owner

PINTO0309 commented Nov 20, 2021

Can be converted normally.

xhost +local: && \
docker run --gpus all -it --rm \
-v `pwd`:/home/user/workdir \
-v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
--device /dev/video0:/dev/video0:mwr \
--net=host \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-e DISPLAY=$DISPLAY \
--privileged \
ghcr.io/pinto0309/openvino2tensorflow:latest

H=640
W=640
MODEL=v5lite_s
openvino2tensorflow \
  --model_path saved_model/openvino/FP32/${MODEL}_${H}x${W}.xml \
  --output_saved_model \
  --output_pb \
  --output_no_quant_float32_tflite
  • v5lite-s.tflite
    [screenshot: converted v5lite-s.tflite graph]
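A quick way to confirm what the converted file expects before wiring it into inference code; a sketch assuming openvino2tensorflow writes model_float32.tflite under saved_model/, as the later comments suggest:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="saved_model/model_float32.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])   # NHWC, e.g. [1 640 640 3]
print(interpreter.get_output_details()[0]["shape"])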

@ppogg
Author

ppogg commented Nov 20, 2021

My God, can you share the converted model? I want to see how it differs from the tflite I converted myself.

@PINTO0309
Owner

saved_model.zip

@ppogg
Author

ppogg commented Nov 20, 2021

Thank you, sir. By the way, I studied this repository for a long time last night. It is really practical.

@PINTO0309
Owner

Thank you. If models with post-processing are provided, I will convert them all and share them with my model zoo.
https://github.com/PINTO0309/PINTO_model_zoo

@ppogg
Author

ppogg commented Nov 20, 2021

Of course, sir, it is my honor. Like this:
[screenshot: model head with post-processing]
This IR model includes post-processing:
https://drive.google.com/file/d/1zvhzWmS_8tQEJPRLWlSbehXT_wRVY45P/view?usp=sharing
[screenshot: model output nodes]

@ppogg
Author

ppogg commented Nov 20, 2021

By the way, the model you just converted for me does not work, but someone previously used this repo to convert yolov5 and it worked: https://github.com/lp6m/yolov5s_android/tree/master/convert_model.
The one you converted for me:
[screenshot: incorrect detections]
Before:
[screenshot: correct detections]
I use the same detect.py.
Maybe it is a new problem. Thank you, sir~
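One common explanation for this symptom (an assumption, not confirmed in this thread) is an input-layout mismatch: openvino2tensorflow emits NHWC tflite models, while a detect.py written for the original NCHW export feeds transposed data. A sketch of the two layouts:

import numpy as np

img = np.random.rand(640, 640, 3).astype(np.float32)  # HWC image in 0..1

nchw = img.transpose(2, 0, 1)[None]  # (1, 3, 640, 640): what an NCHW detect.py builds
nhwc = img[None]                     # (1, 640, 640, 3): what the converted tflite expects
print(nchw.shape, nhwc.shape)

# TFLite rejects the mismatched shape outright; if the caller reshapes the
# buffer to force a fit, the pixel data is scrambled and the detections
# come out as garbage rather than a hard error.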

@PINTO0309
Owner

PINTO0309 commented Nov 20, 2021

  • OpenVINO → TensorFlow → ONNX and TFLite: comparison of inference results.
import onnxruntime
import tensorflow as tf
import time
import numpy as np
from pprint import pprint

H=640
W=640
MODEL='model_float32'

############################################################

onnx_session = onnxruntime.InferenceSession(f'saved_model_{H}x{W}/{MODEL}.onnx')
input_name = onnx_session.get_inputs()[0].name
output_name = onnx_session.get_outputs()[0].name

roop = 1
e = 0.0
result = None
inp = np.ones((1,3,H,W), dtype=np.float32)
for _ in range(roop):
    s = time.time()
    result = onnx_session.run(
        [output_name],
        {input_name: inp}
    )
    e += (time.time() - s)
print('ONNX output @@@@@@@@@@@@@@@@@@@@@@@')
print(f'elapsed time: {e/roop*1000}ms')
print(f'shape: {result[0].shape}')
pprint(result)

############################################################

interpreter = tf.lite.Interpreter(model_path=f'saved_model_{H}x{W}/{MODEL}.tflite', num_threads=4)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

roop = 1
e = 0.0
result = None
inp = np.ones((1,H,W,3), dtype=np.float32)
for _ in range(roop):
    s = time.time()
    interpreter.set_tensor(input_details[0]['index'], inp)
    interpreter.invoke()
    result = interpreter.get_tensor(output_details[0]['index'])
    e += (time.time() - s)
print('tflite output @@@@@@@@@@@@@@@@@@@@@@@')
print(f'elapsed time: {e/roop*1000}ms')
print(f'shape: {result.shape}')
pprint(result)
user@ubuntu2004:~/workdir$ python3 onnx_tflite_test.py 
ONNX output @@@@@@@@@@@@@@@@@@@@@@@
elapsed time: 29.294729232788086ms
shape: (1, 25200, 85)
[array([[[2.51785994e+00, 2.60093212e+00, 1.49863968e+01, ...,
         4.47389483e-03, 1.61233544e-03, 1.08805299e-02],
        [1.17213287e+01, 3.59181929e+00, 2.65871239e+01, ...,
         3.91250849e-03, 1.57558918e-03, 1.05172098e-02],
        [1.90585785e+01, 3.66961241e+00, 3.22659492e+01, ...,
         3.77982855e-03, 1.24087930e-03, 1.00672245e-02],
        ...,
        [5.61473145e+02, 6.06712646e+02, 1.61073944e+02, ...,
         7.55631924e-03, 7.73429871e-04, 1.77630782e-03],
        [5.85584778e+02, 6.07984070e+02, 1.28446884e+02, ...,
         7.52204657e-03, 8.94069672e-04, 2.54270434e-03],
        [6.17867249e+02, 6.16864014e+02, 1.56085968e+02, ...,
         7.01209903e-03, 9.43809748e-04, 2.71356106e-03]]], dtype=float32)]

INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
tflite output @@@@@@@@@@@@@@@@@@@@@@@
elapsed time: 54.502010345458984ms
shape: (1, 25200, 85)
array([[[2.5178699e+00, 2.6009421e+00, 1.4986400e+01, ...,
         4.4739130e-03, 1.6123326e-03, 1.0880618e-02],
        [1.1721323e+01, 3.5918303e+00, 2.6587116e+01, ...,
         3.9125220e-03, 1.5755324e-03, 1.0517325e-02],
        [1.9058573e+01, 3.6696162e+00, 3.2265942e+01, ...,
         3.7798532e-03, 1.2408829e-03, 1.0067256e-02],
        ...,
        [5.6147308e+02, 6.0671265e+02, 1.6107385e+02, ...,
         7.5563113e-03, 7.7344757e-04, 1.7763070e-03],
        [5.8558478e+02, 6.0798407e+02, 1.2844681e+02, ...,
         7.5220633e-03, 8.9407619e-04, 2.5427316e-03],
        [6.1786725e+02, 6.1686395e+02, 1.5608585e+02, ...,
         7.0121274e-03, 9.4381260e-04, 2.7135969e-03]]], dtype=float32)
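To quantify the agreement visible above, one extra helper can diff the two outputs. The demo values below are the first entries printed by the two runs; in the script itself one would pass the ONNX result[0] and the tflite result instead:

import numpy as np

def max_abs_diff(a: np.ndarray, b: np.ndarray) -> float:
    """Largest element-wise deviation between two output tensors."""
    return float(np.max(np.abs(a - b)))

a = np.array([2.51785994, 2.60093212], dtype=np.float32)  # ONNX, from above
b = np.array([2.5178699, 2.6009421], dtype=np.float32)    # tflite, from above
print(max_abs_diff(a, b))  # ~1e-5, matching the agreement seen in the logs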

@ppogg
Author

ppogg commented Nov 20, 2021

Oh, I see. Maybe I got something wrong. Thank you very much!

@PINTO0309
Owner

PINTO0309 commented Nov 20, 2021

Committed.

The input resolution is too large, so I aborted the conversion to EdgeTPU.
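For reference, EdgeTPU compilation requires a full-integer tflite first; a minimal sketch of that quantization step, assuming a hypothetical calibration generator and the saved_model path from earlier (per the comment above, a smaller input resolution than 640x640 would be advisable for the EdgeTPU):

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Replace with real calibration images resized to the model input.
    for _ in range(100):
        yield [np.random.rand(1, 640, 640, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_full_integer_quant.tflite", "wb") as f:
    f.write(converter.convert())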

@ppogg
Author

ppogg commented Nov 20, 2021

I like it, good job sir!

@PINTO0309
Owner

Since there seems to be no particular progress, I will close this for now; this helps me keep track of the tasks I still need to do. Please open another issue when you need to.

@Houangnt

> Sorry sir, like this: v5lite-s.xml: https://drive.google.com/file/d/1POkWJaGG8qyebf1PCVWp_fMTmFJ4jl1C/view?usp=sharing v5lite-s.bin: https://drive.google.com/file/d/1Mq8cFc2rTX1KwFeaSc3bUlVyga1bWl5H/view?usp=sharing The dataset is COCO.

How can I get v5lite.xml from my own model? I tried using your v5lite-lite.xml with my v5lite-lite.bin and got this error:
[screenshot: conversion error]
After that, when I used both of your files (xml and bin), I got no error. I am wondering whether I have to use my own v5-lite.xml and v5-lite.bin files for the conversion, but I don't know how to generate v5lite.xml.

@PINTO0309
Owner

Please ask the author himself.
https://github.com/ppogg/YOLOv5-Lite

Repository owner locked as resolved and limited conversation to collaborators Jan 28, 2022
@PINTO0309 PINTO0309 added the YOLOv5 Read the README. label Aug 31, 2022