
New smart_inference_mode() conditional decorator #8957

Merged
merged 1 commit into master on Aug 13, 2022

Conversation

glenn-jocher (Member) commented on Aug 13, 2022

Applies the torch.inference_mode() decorator when torch>=1.9.0, falling back to the torch.no_grad() decorator otherwise. Material speed improvements were observed in detect.py and val.py.
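The selection logic can be sketched in plain Python. The context managers below are stand-ins for torch.inference_mode and torch.no_grad, and TORCH_VERSION stands in for torch.__version__ (the real decorator lives in utils/torch_utils.py and uses the repo's check_version helper), so the names here are illustrative assumptions, not the actual implementation:

```python
from contextlib import contextmanager

TORCH_VERSION = "1.12.0"  # stand-in for torch.__version__


@contextmanager
def inference_mode():  # stand-in for torch.inference_mode
    yield


@contextmanager
def no_grad():  # stand-in for torch.no_grad
    yield


def check_version(current, minimum):
    # Naive numeric compare, e.g. "1.12.0" >= "1.9.0"; ignores suffixes like "+cu117"
    as_tuple = lambda v: tuple(int(x.split("+")[0]) for x in v.split(".")[:3])
    return as_tuple(current) >= as_tuple(minimum)


def smart_inference_mode(torch_1_9=check_version(TORCH_VERSION, "1.9.0")):
    # Choose the wrapping context once (at definition time), then apply it
    # to the decorated function
    def decorate(fn):
        return (inference_mode if torch_1_9 else no_grad)()(fn)

    return decorate


@smart_inference_mode()
def run_inference(x):
    return x * 2  # in the real setting, gradient tracking is disabled here
```

Because the version check runs once when the decorator is applied, there is no per-call branching overhead at inference time.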

πŸ› οΈ PR Summary

Made with ❀️ by Ultralytics Actions

🌟 Summary

Integration of a new smart_inference_mode decorator across various Ultralytics YOLOv5 files for improved handling of inference operations.

πŸ“Š Key Changes

  • smart_inference_mode Decorator: A new decorator function, smart_inference_mode, has been created to wrap functions that previously used the torch.no_grad() decorator.
  • Updates in Critical Files: Modifications made in detect.py, export.py, val.py, models/common.py, and models/yolo.py to use the new decorator.
  • EMA Update Function: The Exponential Moving Average (EMA) update function in utils/torch_utils.py now uses smart_inference_mode instead of torch.no_grad().
  • Minor Refactoring: The _make_grid function in models/yolo.py now accepts an additional torch-version flag so it can call torch.meshgrid with arguments compatible with the installed torch release.

🎯 Purpose & Impact

  • Enhanced Inference Performance: The change transitions to torch.inference_mode for PyTorch versions >=1.9.0, which provides a more performant context manager for inference operations, potentially speeding up the process.
  • Backward Compatibility: The new decorator maintains compatibility with older versions of PyTorch by falling back to torch.no_grad() when necessary.
  • Simplified Codebase: By replacing individual torch.no_grad() calls with a decorator that checks the version and decides the context, the codebase becomes cleaner and more maintainable.
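At call sites, the migration amounts to a decorator swap. A hedged sketch of what that looks like (the try/except fallback is only so this snippet runs without torch installed; the function name `predict` is illustrative):

```python
try:
    import torch

    def smart_inference_mode():
        # torch.inference_mode exists from 1.9.0; compare major.minor only
        major_minor = tuple(int(v.split("+")[0]) for v in torch.__version__.split(".")[:2])

        def decorate(fn):
            return (torch.inference_mode if major_minor >= (1, 9) else torch.no_grad)()(fn)

        return decorate
except ImportError:

    def smart_inference_mode():  # no-op fallback so the example runs anywhere
        return lambda fn: fn


# Before this PR a call site would use: @torch.no_grad()
@smart_inference_mode()  # after: one decorator handles both torch versions
def predict(values):
    return [v * 0.5 for v in values]
```

The actual YOLOv5 code assumes torch is present; the guard above exists purely to keep the sketch self-contained.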

These updates aim to ensure that the YOLOv5 codebase remains modern and efficient, aligning with the latest best practices for PyTorch inference, while still supporting older versions of the framework seamlessly. Users should benefit from potential performance improvements during model inference without needing to alter their own usage of the library. πŸš€πŸ“ˆ

@glenn-jocher glenn-jocher self-assigned this Aug 13, 2022
@glenn-jocher glenn-jocher merged commit dc38cd0 into master Aug 13, 2022
@glenn-jocher glenn-jocher deleted the update/smart_inference_mode branch August 13, 2022 18:38
ctjanuhowski pushed a commit to ctjanuhowski/yolov5 that referenced this pull request Sep 8, 2022