TFLite Model Creation #232
This is something on our TODO list, so I can't offer you any advice at the moment. Can you upload your current conversion script here?
My conversion steps were (a rough sketch of them is below this list):

1. Export to ONNX
2. Export the model as a .pb file
3. Read the frozen weights
4. Wrap the frozen graph into ConcreteFunctions
5. Convert to a tflite model
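For readers who want to see the five steps end to end, here is a minimal sketch in Python. This is not the original script: the file names, the 640x640 input size, the checkpoint layout, and the tensor names `images:0` / `output:0` are assumptions (onnx-tf may assign different names in your graph), and newer onnx-tf versions write a SavedModel directory instead of a single .pb file.

```python
# Sketch: PyTorch -> ONNX -> frozen GraphDef -> ConcreteFunction -> TFLite
import torch
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

# 1. Export to ONNX (assumes the usual YOLOv5 checkpoint layout)
model = torch.load('yolov5s.pt', map_location='cpu')['model'].float().eval()
dummy = torch.zeros(1, 3, 640, 640)
torch.onnx.export(model, dummy, 'yolov5s.onnx', opset_version=11,
                  input_names=['images'], output_names=['output'])

# 2. Export the ONNX model as a .pb file
#    (older onnx-tf versions write a frozen GraphDef; newer ones write a SavedModel dir)
prepare(onnx.load('yolov5s.onnx')).export_graph('yolov5s.pb')

# 3. Read the frozen weights into a GraphDef
graph_def = tf.compat.v1.GraphDef()
with open('yolov5s.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# 4. Wrap the frozen graph into a ConcreteFunction
def wrap_frozen_graph(gd, inputs, outputs):
    wrapped = tf.compat.v1.wrap_function(
        lambda: tf.compat.v1.import_graph_def(gd, name=''), [])
    return wrapped.prune(
        tf.nest.map_structure(wrapped.graph.as_graph_element, inputs),
        tf.nest.map_structure(wrapped.graph.as_graph_element, outputs))

frozen_func = wrap_frozen_graph(graph_def, inputs='images:0', outputs='output:0')

# 5. Convert the ConcreteFunction to a TFLite model
converter = tf.lite.TFLiteConverter.from_concrete_functions([frozen_func])
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
with open('yolov5s.tflite', 'wb') as f:
    f.write(converter.convert())
```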
I could not convert the model to tflite using opset version 11.
@Misterrendal have you succeeded?
@Misterrendal did that solution work out for you?
I was able to use @Misterrendal's code to create a tflite version! I had to replace this line with Misterrendal's ONNX export statement, then run the ONNX export as described in #251. This results in a .pb file. Then I pasted the other 4 code blocks into a .py file and installed some dependencies.
@keesschollaart81 Were you able to successfully run the converted tflite model? I used @Misterrendal's code to convert the ONNX model to tflite successfully, but when I try to run inference, I get a segfault. The code for prediction I have used:
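For reference, here is a minimal `tf.lite.Interpreter` inference sketch (not the snippet from the comment above; the model path and the dummy input are assumptions). A segfault at invoke() time is often caused by feeding an input whose shape or dtype doesn't match the model's input details:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='yolov5s.tflite')  # path is an assumption
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A shape/dtype mismatch here is a common cause of crashes at invoke() time
print(input_details[0]['shape'], input_details[0]['dtype'])

# Dummy input scaled to [0, 1]; replace with a real preprocessed image
img = np.random.rand(*input_details[0]['shape']).astype(np.float32)

interpreter.set_tensor(input_details[0]['index'], img)
interpreter.invoke()
pred = interpreter.get_tensor(output_details[0]['index'])
print(pred.shape)
```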
@batrlatom, maybe you could try #959.
@zldrobit looks like it works. Thanks!
@zldrobit tested it as well, it works, thanks
I am having a problem with the models module. When I tried to pip install it, I ran into a number of problems. After some searching I found out the package was renamed to doqu, but when I tried using it, it popped up an error about a missing module named document_base. Does anyone know a workaround for @zldrobit's PR? Thanks for the great work. Also, is the result of the conversion int8 quantized?
@zldrobit Do you think you could provide complete inference code with NMS in TF, so people without deeper knowledge of TF can just use your code in tflite or tfjs? By the way, I was able to export your saved_model into tfjs, which is great!
@JimBratsos the result is not int8 quantized.
@zldrobit Any news on the progress of NMS for TensorFlow?
@batrlatom did you find success running in tfjs? I have made the conversion, but I'm seeing that many of the confidences are off (around 0 or 1). I'm wondering if I made a quantization error along the way. Did you run into anything like this?
If anyone encounters what I had above: the problem was that I wasn't preprocessing the tfjs image correctly. I needed to scale the pixel values to [0, 1].
@Jacobsolawetz Hi, unfortunately I was not able to make NMS work, so I'm waiting to see if @zldrobit will make it available.
@batrlatom @ntlex I have pushed a new commit to support adding NMS in the SavedModel and GraphDef: https://github.com/zldrobit/yolov5/tree/tf-android. Use it to export the SavedModel and GraphDef, and detect objects with one of them.
If you have further questions, please reply in #1127, so I won't miss it :D
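For anyone who just wants the general idea of appending NMS to the model output inside the TF graph, here is a rough sketch using `tf.image.combined_non_max_suppression`. This is not zldrobit's implementation; the `[x, y, w, h, objectness, class scores]` output layout and the threshold values are assumptions:

```python
import tensorflow as tf

def add_nms(pred, conf_thres=0.25, iou_thres=0.45, max_det=100):
    """Apply NMS to raw predictions of shape [batch, num_boxes, 5 + num_classes],
    assuming each row is [x, y, w, h, objectness, class scores...]."""
    xywh, obj, cls = pred[..., :4], pred[..., 4:5], pred[..., 5:]
    scores = obj * cls  # per-class confidence = objectness * class probability
    # Convert center/size boxes to corner format [y1, x1, y2, x2]
    x, y, w, h = tf.split(xywh, 4, axis=-1)
    boxes = tf.concat([y - h / 2, x - w / 2, y + h / 2, x + w / 2], axis=-1)
    boxes = tf.expand_dims(boxes, 2)  # [batch, num_boxes, 1, 4] (class-agnostic boxes)
    return tf.image.combined_non_max_suppression(
        boxes, scores,
        max_output_size_per_class=max_det, max_total_size=max_det,
        iou_threshold=iou_thres, score_threshold=conf_thres,
        clip_boxes=False)  # boxes may be in pixels, so don't clip to [0, 1]
```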
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
How were you even able to do inference in TensorFlow.js? I'm getting an error.
I used ONNX to convert the .onnx file to a .pb file. However, I can't convert it to a .tflite model because I don't know what the input and output arrays are. I tried taking a look at the PyTorch model with Netron, but I still haven't had any luck with that. Any recommendations?
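If Netron doesn't help, one option is to read the input and output names directly from the ONNX graph or the frozen GraphDef. A minimal sketch, assuming the files are named `yolov5s.onnx` and `yolov5s.pb`:

```python
import onnx
import tensorflow as tf

# Input/output names of the ONNX graph
onnx_model = onnx.load('yolov5s.onnx')
print('inputs: ', [i.name for i in onnx_model.graph.input])
print('outputs:', [o.name for o in onnx_model.graph.output])

# Node names in the frozen GraphDef: placeholders are the inputs,
# and the last node(s) are usually the outputs
graph_def = tf.compat.v1.GraphDef()
with open('yolov5s.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
print([n.name for n in graph_def.node if n.op == 'Placeholder'])
print(graph_def.node[-1].name)
```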