TFLite Model Creation #232

Closed
vdaita opened this issue Jun 29, 2020 · 21 comments

@vdaita commented Jun 29, 2020

I used ONNX to convert the .onnx file to a .pb file. However, I can't convert it to a .tflite model because I don't know what the input and output arrays are. I tried to inspect the PyTorch model using Netron, but still had no luck. Any recommendations?
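One way to recover the array names is to inspect the frozen graph directly. Here is a minimal sketch, assuming TF 2.x and the tflite/saved_model.pb path used later in this thread (both are assumptions, not from the original comment): placeholder nodes are usually the inputs, and nodes that no other node consumes are candidate outputs.

import tensorflow as tf

# Load the frozen GraphDef (the path is an assumption for illustration)
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("tflite/saved_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Placeholder nodes (no incoming edges) are usually the model inputs
inputs = [n.name for n in graph_def.node if n.op == "Placeholder"]

# Nodes never consumed by another node are candidate outputs
consumed = {i.split(":")[0].lstrip("^")
            for n in graph_def.node for i in n.input}
outputs = [n.name for n in graph_def.node if n.name not in consumed]

print("inputs:", inputs)
print("outputs:", outputs)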

@glenn-jocher (Member) commented:

This is something on our TODO list, so I can't offer you any advice at the moment. Can you upload your current conversion script here?

@Misterrendal commented Jun 30, 2020

Export to ONNX:

import torch

# model: a loaded YOLOv5 PyTorch model; img: a dummy input tensor;
# f: the output .onnx path
model.model[-1].export = True  # set the Detect() layer to export mode
output = model(img)  # dry run
torch.onnx.export(model,
                  img,
                  f,
                  verbose=False,
                  opset_version=10,
                  export_params=True,
                  input_names=['images'],
                  keep_initializers_as_inputs=True,
                  output_names=['output_0', 'output_1', 'output_2'])

Export the model as a .pb file:

from onnx_tf.backend import prepare
import onnx

model_onnx = onnx.load('tflite/saved_model.onnx')
tf_rep = prepare(model_onnx, device='cpu')
tf_rep.export_graph('tflite/saved_model.pb')

Read the frozen graph:

import tensorflow as tf

def wrap_frozen_graph(graph_def, inputs, outputs):
    # Import the GraphDef into a new graph and prune to the given endpoints
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")

    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph

    return wrapped_import.prune(
        tf.nest.map_structure(import_graph.as_graph_element, inputs),
        tf.nest.map_structure(import_graph.as_graph_element, outputs))

with tf.io.gfile.GFile("tflite/saved_model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    loaded = graph_def.ParseFromString(f.read())

Wrap the frozen graph in a ConcreteFunction:

frozen_func = wrap_frozen_graph(graph_def=graph_def,
                                inputs=["images:0"],
                                outputs=['output_0:0','output_1:0', 'output_2:0'])

And convert to a TFLite model:

converter = tf.lite.TFLiteConverter.from_concrete_functions([frozen_func])
# SELECT_TF_OPS falls back to TF ops that have no TFLite builtin equivalent
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tf_lite_model = converter.convert()
open('tflite/yolov5s.tflite', 'wb').write(tf_lite_model)

@Misterrendal commented:

I could not convert the model to TFLite using opset version 11.

@batrlatom (Contributor) commented Jul 19, 2020

@Misterrendal have you succeeded?

@daggarwal01 commented:

@Misterrendal did that solution work out for you?

@keesschollaart81 commented:

I was able to use @Misterrendal's code to create a tflite version!

I had to replace this line with Misterrendal's ONNX export statement torch.onnx.export(model, ...) (with the correct input/output names).

Then run the ONNX export as described in #251. This results in a .pb file. Then I pasted the other four code blocks into a .py file and installed some dependencies: pip3 install tensorflow-gpu==2.2.0 and pip install git+https://github.com/onnx/onnx-tensorflow.git.

@batrlatom (Contributor) commented Sep 6, 2020

@keesschollaart81 Were you able to successfully run the converted tflite model? I used @Misterrendal's code to convert the ONNX model to tflite successfully, but when I try to run inference, I get a segfault.

The prediction code I used:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolov5s.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()

output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)

@zldrobit (Contributor) commented:

@batrlatom, maybe you could try #959.
This PR exports yolov5 to both TensorFlow and TFLite models with TF 2.3.

@batrlatom (Contributor) commented Sep 13, 2020

@zldrobit looks like it works. Thanks!

@Kar1s commented Sep 14, 2020

@zldrobit I tested it as well; it works, thanks!

@JimBratsos commented Sep 14, 2020

I am having a problem with the models module. When I tried pip install, I ran into a number of problems. After some searching I found out the package was renamed to doqu, but when I try to use it, it raises an error about a missing module named document_base. Does anyone know a workaround for @zldrobit's PR? Thanks for the great work. Also, is the result of the conversion int8 quantized?

@batrlatom (Contributor) commented Sep 16, 2020

@zldrobit Do you think you could provide complete inference code with NMS in TF, so people without deeper knowledge of TF can just use your code in tflite or tfjs? BTW, I was able to export your saved_model to tfjs, which is great!

@zldrobit (Contributor) commented:
@JimBratsos the result is not int8 quantized.
@batrlatom Thanks :D
I will add TF NMS in models/tf.py behind an argument, but I am busy with TFLite's GPU delegate problem. I hope it can be solved soon so I have time to add the NMS feature.
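Until that lands, here is a minimal class-agnostic sketch of TF NMS over YOLOv5-style predictions. This is not zldrobit's implementation, and the [num_boxes, 85] layout (xywh box, objectness, 80 class scores) is an assumption based on the standard COCO-trained model:

import tensorflow as tf

def yolo_nms(pred, conf_thres=0.25, iou_thres=0.45, max_det=100):
    # pred: [num_boxes, 85] = (cx, cy, w, h, objectness, 80 class scores) -- assumed layout
    scores_all = pred[:, 4:5] * pred[:, 5:]     # objectness * class probabilities
    scores = tf.reduce_max(scores_all, axis=1)  # best class score per box
    classes = tf.argmax(scores_all, axis=1)

    # xywh -> (y1, x1, y2, x2), the order tf.image.non_max_suppression expects
    cx, cy, w, h = tf.unstack(pred[:, :4], axis=1)
    boxes = tf.stack([cy - h / 2, cx - w / 2, cy + h / 2, cx + w / 2], axis=1)

    keep = tf.image.non_max_suppression(
        boxes, scores, max_output_size=max_det,
        iou_threshold=iou_thres, score_threshold=conf_thres)
    return tf.gather(boxes, keep), tf.gather(scores, keep), tf.gather(classes, keep)

Note that this suppresses boxes across classes; per-class NMS would need tf.image.combined_non_max_suppression or a loop over classes.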

@ntlex commented Oct 16, 2020

@zldrobit Any news on the progress of NMS for TensorFlow?

@Jacobsolawetz (Contributor) commented:

> @zldrobit Do you think you could provide complete inference code with NMS in TF, so people without deeper knowledge of TF can just use your code in tflite or tfjs? BTW, I was able to export your saved_model to tfjs, which is great!

@batrlatom did you find success running in tfjs? I have made the conversion, but I'm seeing that many of the confidences are off (around 0 or 1). I'm wondering if I made a quantization error along the way. Did you run into anything like this?

@Jacobsolawetz (Contributor) commented:

If anyone encounters what I had above: the problem was that I was not preprocessing the tfjs image correctly. I needed to scale pixel values to [0, 1].
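For reference, a minimal sketch of that preprocessing in Python; the NHWC layout and 640x640 input size are assumptions matching the exports discussed above:

import numpy as np

def preprocess(img):
    # img: HxWx3 uint8 array, already resized/letterboxed to 640x640 (assumed)
    x = img.astype(np.float32) / 255.0  # scale pixel values to [0, 1]
    return np.expand_dims(x, 0)         # add batch dim -> (1, 640, 640, 3)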

@batrlatom (Contributor) commented:

@Jacobsolawetz Hi, unfortunately I was not able to make NMS work, so I am waiting to see if @zldrobit will make it available.

@zldrobit (Contributor) commented Nov 3, 2020

@batrlatom @ntlex I have pushed a new commit supporting NMS in the SavedModel and GraphDef exports: https://github.com/zldrobit/yolov5/tree/tf-android.
Try using

PYTHONPATH=. python3  models/tf.py --img 640 --weight weights/yolov5s.pt --cfg models/yolov5s.yaml --tf-nms

to export the SavedModel and GraphDef, and detect objects with one of:

python3 detect.py --img 640 --weight weights/yolov5s.pb --no-tf-nms
python3 detect.py --img 640 --weight weights/yolov5s_saved_model --no-tf-nms

If you have further questions, please reply in #1127 so I won't miss them :D
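For anyone loading the exported SavedModel from Python, here is a minimal sketch; the Keras-format SavedModel and the NHWC 640x640 [0, 1] input are assumptions, not something confirmed in this thread:

import numpy as np
import tensorflow as tf

# Path follows the export command above; layout assumptions noted in the lead-in
model = tf.keras.models.load_model("weights/yolov5s_saved_model")
img = np.zeros((1, 640, 640, 3), dtype=np.float32)  # dummy input
pred = model.predict(img)
outputs = pred if isinstance(pred, (list, tuple)) else [pred]
print([p.shape for p in outputs])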

github-actions bot commented Dec 4, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions bot added the Stale label on Dec 4, 2020, and closed this as completed on Dec 9, 2020.
@dewball345 commented:
> If anyone encounters what I had above: the problem was that I was not preprocessing the tfjs image correctly. I needed to scale pixel values to [0, 1].

How were you even able to run inference in tensorflow.js? I'm getting "Cannot read property 'name' of undefined". What code did you use for inference? How did you export the model to tf.js? I converted from PyTorch to ONNX to .pb to tensorflow.js, and I got inference working on the .pb model but not in JavaScript.
