
Allow int8 quantization for export_tfjs #10948

Merged
merged 3 commits on Feb 10, 2023
Conversation

davidstrahm
Contributor

@davidstrahm commented Feb 10, 2023

The `--int8` param currently has no effect on `export_tfjs`. With this change, `python export.py --weights ../path/to/best.pt --include tfjs --int8` will add the `--quantize_uint8` flag to the `tensorflowjs_converter` call, greatly reducing model size for web usage.

Signed-off-by: David Strahm <david.strahm@lambda-it.ch>
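The shape of the change can be sketched roughly as follows. This is an illustrative sketch, not the actual YOLOv5 code: the helper name `tfjs_converter_args` and the converter flags other than `--quantize_uint8` are assumptions for the example.

```python
# Illustrative sketch: forward an int8 option to tensorflowjs_converter.
# tfjs_converter_args is a hypothetical helper; --quantize_uint8 is the
# flag the PR adds, the other converter arguments are placeholders.
import subprocess


def tfjs_converter_args(pb_model: str, out_dir: str, int8: bool = False) -> list:
    """Build the tensorflowjs_converter command line."""
    args = [
        "tensorflowjs_converter",
        "--input_format=tf_frozen_model",  # placeholder input format
    ]
    if int8:
        # Quantizes weights to 1 byte each, greatly shrinking the export.
        args.append("--quantize_uint8")
    args += [pb_model, out_dir]
    return args


def export_tfjs(pb_model: str, out_dir: str, int8: bool = False) -> None:
    """Run the converter; int8=True enables uint8 weight quantization."""
    subprocess.run(tfjs_converter_args(pb_model, out_dir, int8), check=True)
```

With `int8=False` the command line is unchanged, so the flag is purely opt-in, matching the behavior described above.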

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Enhancements in TensorFlow.js export and Docker security.

📊 Key Changes

  • Added optional INT8 quantization support for TensorFlow.js exports.
  • Reorganized Dockerfile for better readability and efficiency.
  • Included security update by explicitly installing OpenSSL.

🎯 Purpose & Impact

  • 🚀 Enhanced Export Options: Users can now export models to TensorFlow.js with optional INT8 quantization, potentially reducing model size and improving execution speed on supported platforms.
  • 📦 Docker Optimization: Streamlining Dockerfile commands for package installations improves build times and readability.
  • 🔐 Increased Security: Explicit installation of OpenSSL responds to known vulnerabilities, thereby securing the Docker environment.

Contributor

@github-actions bot left a comment


👋 Hello @davidstrahm, thank you for submitting a YOLOv5 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:

  • ✅ Verify your PR is up-to-date with ultralytics/yolov5 `master` branch. If your PR is behind you can update your code by clicking the 'Update branch' button or by running `git pull` and `git merge master` locally.
  • ✅ Verify all YOLOv5 Continuous Integration (CI) checks are passing.
  • ✅ Reduce changes to the absolute minimum required for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." — Bruce Lee

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
@glenn-jocher glenn-jocher merged commit d389840 into ultralytics:master Feb 10, 2023
@glenn-jocher
Member

@davidstrahm PR is merged. Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐

Smfun12 pushed a commit to Smfun12/yolov5 that referenced this pull request Mar 24, 2023
* Allow int8 quantization for export_tfjs

--int8 param currently has no effect on export_tfjs. With this change, `python export.py --weights ../path/to/best.pt --include tfjs --int8` will add the --quantize_uint8 param to the tensorflowjs_converter script, greatly reducing model size for web usage.

Signed-off-by: David Strahm <david.strahm@lambda-it.ch>

* Update Dockerfile

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>

---------

Signed-off-by: David Strahm <david.strahm@lambda-it.ch>
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>