
Update train.py for val.run(half=amp) #8804

Merged
merged 1 commit into master from glenn-jocher-patch-1 on Jul 31, 2022

Conversation

@glenn-jocher (Member) commented on Jul 31, 2022

Disable FP16 validation if AMP checks fail or amp=False.

May partially resolve #7908

πŸ› οΈ PR Summary

Made with ❀️ by Ultralytics Actions

🌟 Summary

Improved validation inference with automatic mixed precision (AMP) during training.

πŸ“Š Key Changes

  • The validation call in train.py now passes half=amp to val.run(), so FP16 validation is used only when AMP is enabled and its checks pass (see the sketch after this summary).

🎯 Purpose & Impact

  • 🎯 Purpose: To enhance the efficiency and speed of the model validation phase by utilizing AMP.
  • πŸ’‘ Impact: Users may experience faster validation times and reduced memory usage while training models, potentially improving the overall training performance on compatible hardware.
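
For readers less familiar with the codebase, here is a minimal, self-contained sketch of the pattern this PR describes: validation inference runs in FP16 only when an AMP check passes. The `amp_supported` and `validate` names below are illustrative stand-ins rather than YOLOv5 APIs; in the repository itself the equivalent change is passing `half=amp` to `val.run()` in train.py.

```python
import torch

def amp_supported(model: torch.nn.Module) -> bool:
    # Illustrative stand-in for YOLOv5's AMP capability check:
    # only allow mixed precision when the model lives on a CUDA device.
    return torch.cuda.is_available() and next(model.parameters()).is_cuda

def validate(model: torch.nn.Module, images: torch.Tensor, half: bool = False) -> torch.Tensor:
    # Mirrors val.run(half=amp): run inference in FP16 only when `half` is True,
    # so validation falls back to FP32 when AMP checks fail or amp=False.
    model.eval()
    if half:
        model, images = model.half(), images.half()
    with torch.no_grad():
        return model(images)

model = torch.nn.Conv2d(3, 8, kernel_size=3)
amp = amp_supported(model)                                    # False on CPU-only machines
preds = validate(model, torch.zeros(1, 3, 32, 32), half=amp)
print(preds.dtype)                                            # torch.float32 unless AMP is available
```

On affected hardware (e.g. the GTX 16xx cards reported in #7908), the AMP check should fail, leaving `amp` False so validation runs in FP32 rather than forced FP16, which is how this change may partially resolve the NaN issue.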

Disable FP16 validation if AMP checks fail or amp=False.
glenn-jocher merged commit 59595c1 into master on Jul 31, 2022
glenn-jocher deleted the glenn-jocher-patch-1 branch on Jul 31, 2022 at 02:17
glenn-jocher self-assigned this on Jul 31, 2022
ctjanuhowski pushed a commit to ctjanuhowski/yolov5 that referenced this pull request Sep 8, 2022
Disable FP16 validation if AMP checks fail or amp=False.
Development

Successfully merging this pull request may close these issues:

NaN tensor values problem for GTX16xx users (no problem on other devices)
1 participant