
Add TensorFlow formats to export.py #4479

Merged
merged 50 commits into master from update/tf_export
Sep 12, 2021

Conversation

glenn-jocher
Member

@glenn-jocher glenn-jocher commented Aug 18, 2021

@zldrobit this is my main PR for migrating TensorFlow export support to export.py following #1127, while retaining the Keras model-building functionality you created in tf.py.

Export a YOLOv5 PyTorch model to TorchScript, ONNX, CoreML, and TensorFlow (saved_model, pb, TFLite, TF.js) formats
TensorFlow exports authored by https://github.com/zldrobit

Usage:

$ python path/to/export.py --weights yolov5s.pt --include torchscript onnx coreml saved_model pb tflite tfjs

Inference:

$ python path/to/detect.py --weights yolov5s.pt
                                     yolov5s.onnx  (must export with --dynamic)
                                     yolov5s_saved_model
                                     yolov5s.pb
                                     yolov5s.tflite

TensorFlow.js:

$ python path/to/export.py --weights yolov5s.pt --include tfjs
$ # Edit yolov5s_web_model/model.json to sort Identity* in ascending order
$ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example
$ npm install
$ ln -s ../../yolov5/yolov5s_web_model public/yolov5s_web_model
$ npm start

πŸ› οΈ PR Summary

Made with ❀️ by Ultralytics Actions

🌟 Summary

This PR updates the Ultralytics YOLOv5 repository with new export capabilities and various adjustments.

πŸ“Š Key Changes

  • Added *_web_model/ to .dockerignore and .gitignore, suggesting new support for web models.
  • Updated dependencies for export functionality, replacing onnx-simplifier and coremltools with tensorflow-cpu.
  • Introduced changes in CI testing to accommodate new export pipelines.
  • Modified detect.py and export.py for stronger TensorFlow integration and simplifications within the export process.
  • Improved the tf.py TensorFlow model script with new functionalities for exporting models, removing redundant code, and introducing TF.js export support.
  • Added new models (e.g., TFBN, TFConv, etc.) that seem to correspond to TensorFlow/Keras equivalents of existing YOLOv5 layers.
  • Simplified the requirements in requirements.txt, highlighting the shift towards TensorFlow-based operations.

🎯 Purpose & Impact

  • πŸš€ Enhanced Export Options: Users can now export their YOLOv5 models to a variety of TensorFlow formats, facilitating integration into web applications and providing a better pathway for model deployment on various platforms.
  • πŸ›  Improved Codebase: By adding support for TensorFlow.js and refactoring the TensorFlow conversion script, the codebase becomes more versatile and user-friendly for those familiar with the TF ecosystem.
  • ☁️ Easier Cloud Integration: The changes in .github/workflows/ci-testing.yml imply that CI now includes tests for TensorFlow export functionality, which ensures reliability for cloud-based applications.
  • ⚑ Streamlined Workflows: Removing dependencies on onnx-simplifier and coremltools may lower the barrier to entry for some users and streamline the model export process, focusing more on TensorFlow-centric workflows.

@glenn-jocher glenn-jocher self-assigned this Aug 18, 2021
@glenn-jocher glenn-jocher mentioned this pull request Sep 10, 2021
@glenn-jocher
Member Author

@zldrobit this PR is almost done! I'm understanding a lot more about TF export after doing all of this, and I had a few quick questions:

  • Fusing YOLOv5 models before TF export. I can't get this to work at all with TF exports. The rest of the exports work well with or without fusing (i.e. ONNX and CoreML models export correctly either way). TF fused export does not throw an error, but detect.py does not detect anything with any fused TF models (pb, tflite, saved_model). I thought I had accounted for fused Conv() layers in this line here. Do you know what the problem might be? It would be advantageous (for faster inference and smaller size) to be able to export fused TF YOLOv5 models.

    self.bn = tf_BN(w.bn) if hasattr(w, 'bn') else tf.identity

  • Trainable Params. I think we want zero trainable parameters during the export process. I assume these have gradients enabled. Do you know how to disable these in TF before export?

==================================================================================================
Total params: 7,295,869
Trainable params: 7,276,605
Non-trainable params: 19,264
__________________________________________________________________________________________________
  • Untraced functions. During saved_model export I get this warning; I'm not sure whether it's important:
Found untraced functions such as tf__conv_layer_call_fn, tf__conv_layer_call_and_return_conditional_losses, tf_bn_1_layer_call_fn, tf_bn_1_layer_call_and_return_conditional_losses, tf__conv_2_layer_call_fn while saving (showing 5 of 845). These functions will not be directly callable after loading.
  • yolov5s-fp16.tflite seems to actually be 8-bit. When I look at the file sizes of yolov5s-fp16.tflite and yolov5s-int8.tflite, they are both about 7.7 MB, about 50% of the size of yolov5s.pt, which is saved in FP16 by default. So it seems both TFLite models are already quantized to 8 bits, and the 'fp16' label is incorrect?
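A quick back-of-the-envelope check, using the 7,295,869 total params from the summary above, supports the suspicion that both files hold 8-bit weights (this ignores file-format overhead):

```python
# Rough expected file sizes from the parameter count alone
params = 7_295_869

fp16_mb = params * 2 / 1e6  # 2 bytes/param -> ~14.6 MB expected for true FP16
int8_mb = params * 1 / 1e6  # 1 byte/param  -> ~7.3 MB expected for INT8

print(round(fp16_mb, 1), round(int8_mb, 1))  # 14.6 7.3
```

Both observed files are ~7.7 MB, close to the INT8 estimate and far from the FP16 one, so the 'fp16' file is likely 8-bit.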


@glenn-jocher glenn-jocher merged commit c3a93d7 into master Sep 12, 2021
@glenn-jocher glenn-jocher deleted the update/tf_export branch September 12, 2021 13:52
@zldrobit
Contributor

@glenn-jocher I did some tests to verify the solutions for the following questions:

  • Fusing YOLOv5 models before TF export. This is because in the construction of the conv layer,
    bias_initializer is set to zeros by default (https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D). To successfully convert a fused model,

    yolov5/models/tf.py, lines 95 to 97 in cd810c8:

    conv = keras.layers.Conv2D(
        c2, k, s, 'SAME' if s == 1 else 'VALID', use_bias=False,
        kernel_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()))

    has to be changed to
    conv = keras.layers.Conv2D(
        c2, k, s, 'SAME' if s == 1 else 'VALID', use_bias=False if hasattr(w, 'bn') else True,
        kernel_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()),
        bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy()))

  • Trainable Params. Set keras_model.trainable = False before saving, which moves every parameter into the non-trainable count:

    keras_model = keras.Model(inputs=inputs, outputs=outputs)
    keras_model.trainable = False
    keras_model.summary()
    keras_model.save(f, save_format='tf')

  • yolov5s-fp16.tflite. To export a genuine FP16 TFLite model, add

    converter.target_spec.supported_types = [tf.float16]

    in export_tflite() of export.py. Note that the inference time per image on an i9 CPU dropped significantly, from ~4 s (320x320 input) to under 400 ms (640x640 input), after adding this line.

For convenience, I'll send a PR to address these issues.
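As a sanity check on why the fused-conv fix works: fusing folds BatchNorm into the preceding conv, producing scaled weights plus exactly the bias term that a hard-coded zero bias_initializer throws away. A small NumPy sketch (a 1x1 conv modeled as a matmul; all names here are illustrative, not from tf.py):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))          # 5 samples, 3 channels
w = rng.standard_normal((3, 3))          # conv weights (1x1 conv as a matmul)
gamma, beta = rng.standard_normal(3), rng.standard_normal(3)
mean, var, eps = rng.standard_normal(3), rng.random(3) + 0.5, 1e-5

# Reference: conv without bias, followed by batchnorm
y_ref = (x @ w - mean) / np.sqrt(var + eps) * gamma + beta

# Fused: fold BN into the conv as scaled weights plus a bias term
scale = gamma / np.sqrt(var + eps)
w_fused = w * scale                      # broadcasts over output channels
b_fused = beta - mean * scale            # this is the bias a zero initializer discards
y_fused = x @ w_fused + b_fused

print(np.allclose(y_ref, y_fused))  # True
```

Dropping b_fused (as the zero bias_initializer effectively did for fused models) shifts every output channel, which is consistent with detect.py finding no detections rather than raising an error.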

@glenn-jocher
Member Author

glenn-jocher commented Sep 16, 2021

@zldrobit wow, thanks for the investigation! That's awesome you found a fuse solution. I'll take a deeper look later, but to give you a quick update: TFLite and TF.js exports seem to be working well now via export.py. Usage is like this:

python export.py --weights yolov5s.pt --include tflite  # produces yolov5s.tflite which was previously labelled as fp16
python export.py --weights yolov5s.pt --include tflite --int8  # produces yolov5s-int8.tflite using dataset generator
python export.py --weights yolov5s.pt --include tfjs  # produces yolov5s_web_model
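For context, the --int8 path relies on a representative dataset for calibration. A minimal, self-contained sketch of that TFLite converter setup (the toy Dense model and random calibration batches are stand-ins for the real YOLOv5 Keras model and training images; this is not the repo's export_tflite()):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Toy stand-in for the exported Keras model
keras_model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(2)])

def representative_dataset():
    # Calibration batches; export.py feeds real training images here
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_bytes = converter.convert()  # serialized INT8 flatbuffer
print(len(tflite_bytes) > 0)
```

Without a representative_dataset, the converter can only quantize weights; the calibration data is what lets it fix activation ranges for full 8-bit inference.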

EDIT: BTW, I was not able to pass all parameters to all places in the TF model; agnostic NMS, for example, now uses hard-coded NMS parameters, but I plan to fix this in the future. Previously these parameters were global variables from opt; now they need to be passed in as local variables.

@zldrobit
Contributor

@glenn-jocher I found a solution to pass all parameters to the TF NMS operation in #4905. That PR also addresses the Identity_* reordering problem by regex substitution when exporting tfjs models, so no manual operation is needed.
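The Identity_* reordering that previously had to be done by hand in model.json (and that #4905 automates via regex substitution) amounts to sorting the model's output names numerically. A hedged illustration on a made-up model.json output fragment:

```python
import re

# Hypothetical minimal model.json output mapping, out of order as tfjs emits it
outputs = {"Identity_1": {}, "Identity": {}, "Identity_2": {}}

def identity_key(name):
    # 'Identity' sorts first, then 'Identity_1', 'Identity_2', ...
    m = re.fullmatch(r"Identity(?:_(\d+))?", name)
    return int(m.group(1)) if m and m.group(1) else 0

ordered = dict(sorted(outputs.items(), key=lambda kv: identity_key(kv[0])))
print(list(ordered))  # ['Identity', 'Identity_1', 'Identity_2']
```

The actual PR rewrites the serialized JSON text rather than a parsed dict, but the ordering rule is the same.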

@glenn-jocher
Member Author

@zldrobit nice work! I've merged #4905 now.

CesarBazanAV pushed a commit to CesarBazanAV/yolov5 that referenced this pull request Sep 29, 2021
* Initial commit

* Remove unused export_torchscript return

* ROOT variable

* Add prefix to fcn arg

* fix ROOT

* check_yaml into run()

* interim fixes

* imgsz=(320, 320)

* Hardcode tf_raw_resize False

* Finish opt elimination

* Update representative_dataset_gen()

* Update export.py with TF methods

* SiLU and GraphDef fixes

* file_size() directory handling feature

* export fixes

* add lambda: to representative_dataset

* Detect training False default

* Fuse false for TF models

* Embed agnostic NMS arguments

* Remove lambda

* TensorFlow.js export success

* Add pb to Usage

* Add *_tfjs_model/ to ignore files

* prepend YOLOv5 to function headers

* Remove end --- comments

* parameterize tfjs export pb file

* update run() data default /ROOT

* update --include help

* update imports

* return ct_model

* Consolidate TFLite export

* pb prerequisite to tfjs

* TF modules CamelCase

* Remove exports from tf.py and cleanup

* pass agnostic NMS arguments

* CI

* CI

* ignore *_web_model/

* Add tensorflow to CI dependencies

* CI tensorflow-cpu

* Update requirements.txt

* Remove tensorflow check_requirement

* CI coreml tfjs

* export only onnx torchscript

* reorder exports torchscript first
BjarneKuehl pushed a commit to fhkiel-mlaip/yolov5 that referenced this pull request Aug 26, 2022