
export.py - TensorFlow Lite export failure due to EndVector() #10202

Closed
HripsimeS opened this issue Nov 18, 2022 · 3 comments
Labels
question Further information is requested

Comments

@HripsimeS

Search before asking

Question

@glenn-jocher Hello. I need to convert the yolov5s.pt weights to yolov5s.tflite and save the TFLite file in a folder. I saw that export.py normally performs this conversion, but I did not get an output model in .tflite format. I ran this command:

python export.py --weights yolov5s.pt --include tflite

In the weights folder it saved a best_saved_model folder containing:

  1. an assets folder, which was empty;
  2. a variables folder with variables.data-00000-of-00001 and variables.index files;
  3. a saved_model.pb file.

The last line of output after running the command above is:
TensorFlow Lite: export failure 58.0s: EndVector() takes 1 positional argument but 2 were given

So the export failed due to EndVector(), which I could not find anywhere in the export.py file. Could you please help me figure out what the issue is and how to fix it so that I get a yolov5s.tflite output file? Looking forward to hearing from you soon!
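(For context, this error does not come from export.py itself but from a dependency: flatbuffers 1.x exposed Builder.EndVector with an element-count parameter, while flatbuffers 2.x dropped it, so call sites written against the old API raise exactly this TypeError. The sketch below uses hypothetical stand-in classes, not the real flatbuffers Builder, to show the mechanics of the break.)

```python
# Hypothetical stand-ins illustrating the flatbuffers API change behind the error.
class BuilderV1:
    def EndVector(self, vectorNumElems):  # 1.x-style: caller passes the count
        return vectorNumElems

class BuilderV2:
    def EndVector(self):  # 2.x-style: count is tracked internally, no argument
        return 0

def converter_call(builder):
    # A call site written against the 1.x API, as in older TF converter code.
    return builder.EndVector(3)

converter_call(BuilderV1())  # works with the old signature

error_message = ""
try:
    converter_call(BuilderV2())  # same call against the new signature fails
except TypeError as err:
    error_message = str(err)

# error_message contains "takes 1 positional argument but 2 were given",
# matching the message in the export log above.
print(error_message)
```

This is why downgrading flatbuffers (as in the fix below in this thread) resolves the export without any change to export.py.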

Additional

No response

@HripsimeS HripsimeS added the question Further information is requested label Nov 18, 2022
@HripsimeS HripsimeS changed the title export.py does not convert .pt weight to .tflite export.py - TensorFlow Lite export failure due to EndVector() Nov 18, 2022
@glenn-jocher (Member)

@HripsimeS 👋 Hello! Thanks for asking about YOLOv5 🚀 benchmarks. YOLOv5 inference is officially supported in 11 formats, and all formats are benchmarked for identical accuracy and to compare speed every 24 hours by the YOLOv5 CI.

Due to these daily benchmarks we can tell that TFLite export is operating correctly and there are no errors there.

💡 ProTip: Export to ONNX or OpenVINO for up to 3x CPU speedup. See CPU Benchmarks.
💡 ProTip: Export to TensorRT for up to 5x GPU speedup. See GPU Benchmarks.

| Format | export.py --include | Model |
| --- | --- | --- |
| PyTorch | - | yolov5s.pt |
| TorchScript | torchscript | yolov5s.torchscript |
| ONNX | onnx | yolov5s.onnx |
| OpenVINO | openvino | yolov5s_openvino_model/ |
| TensorRT | engine | yolov5s.engine |
| CoreML | coreml | yolov5s.mlmodel |
| TensorFlow SavedModel | saved_model | yolov5s_saved_model/ |
| TensorFlow GraphDef | pb | yolov5s.pb |
| TensorFlow Lite | tflite | yolov5s.tflite |
| TensorFlow Edge TPU | edgetpu | yolov5s_edgetpu.tflite |
| TensorFlow.js | tfjs | yolov5s_web_model/ |

Benchmarks

Benchmarks below run on a Colab Pro with the YOLOv5 tutorial notebook. To reproduce:

python utils/benchmarks.py --weights yolov5s.pt --imgsz 640 --device 0

Colab Pro V100 GPU

benchmarks: weights=/content/yolov5/yolov5s.pt, imgsz=640, batch_size=1, data=/content/yolov5/data/coco128.yaml, device=0, half=False, test=False
Checking setup...
YOLOv5 🚀 v6.1-135-g7926afc torch 1.10.0+cu111 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)
Setup complete ✅ (8 CPUs, 51.0 GB RAM, 46.7/166.8 GB disk)

Benchmarks complete (458.07s)
                   Format  mAP@0.5:0.95  Inference time (ms)
0                 PyTorch        0.4623                10.19
1             TorchScript        0.4623                 6.85
2                    ONNX        0.4623                14.63
3                OpenVINO           NaN                  NaN
4                TensorRT        0.4617                 1.89
5                  CoreML           NaN                  NaN
6   TensorFlow SavedModel        0.4623                21.28
7     TensorFlow GraphDef        0.4623                21.22
8         TensorFlow Lite           NaN                  NaN
9     TensorFlow Edge TPU           NaN                  NaN
10          TensorFlow.js           NaN                  NaN

Colab Pro CPU

benchmarks: weights=/content/yolov5/yolov5s.pt, imgsz=640, batch_size=1, data=/content/yolov5/data/coco128.yaml, device=cpu, half=False, test=False
Checking setup...
YOLOv5 🚀 v6.1-135-g7926afc torch 1.10.0+cu111 CPU
Setup complete ✅ (8 CPUs, 51.0 GB RAM, 41.5/166.8 GB disk)

Benchmarks complete (241.20s)
                   Format  mAP@0.5:0.95  Inference time (ms)
0                 PyTorch        0.4623               127.61
1             TorchScript        0.4623               131.23
2                    ONNX        0.4623                69.34
3                OpenVINO        0.4623                66.52
4                TensorRT           NaN                  NaN
5                  CoreML           NaN                  NaN
6   TensorFlow SavedModel        0.4623               123.79
7     TensorFlow GraphDef        0.4623               121.57
8         TensorFlow Lite        0.4623               316.61
9     TensorFlow Edge TPU           NaN                  NaN
10          TensorFlow.js           NaN                  NaN

Good luck 🍀 and let us know if you have any other questions!

@HripsimeS (Author)

@glenn-jocher thanks for your quick reply. I was able to fix it with the command below, so I will close this issue!
pip install --upgrade flatbuffers==1.12 # downgrade from v2 to v1.12
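(Editor's note: if you hit the same error, it can help to check the installed flatbuffers version first. The helper below is a hypothetical sketch, not part of export.py; it assumes the rule of thumb from this thread that any flatbuffers major version of 2 or higher drops the EndVector count argument.)

```python
def flatbuffers_breaks_endvector(version: str) -> bool:
    """Heuristic from this thread: flatbuffers >= 2.0 (including the later
    date-based versions like 22.x) removed EndVector's count parameter,
    which breaks older TFLite export code paths."""
    major = int(version.split(".")[0])
    return major >= 2

print(flatbuffers_breaks_endvector("2.0.7"))   # affected, downgrade to 1.12
print(flatbuffers_breaks_endvector("1.12"))    # not affected
```

You can get the installed version with `pip show flatbuffers` and feed it to a check like this before attempting the export.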

@glenn-jocher (Member)

Oh, that's strange. We used to pin flatbuffers to a fixed version like this, but I think it caused problems with newer versions of TF.
