
[YOLOv5] Change default cache from ram to disk #257

Merged 1 commit into main from update_default_cache on Jul 12, 2023
Conversation

@dsikka (Contributor) commented on Jul 11, 2023

For some quick benchmarking, I did two training_aware runs using the model and data below, one with disk caching and one with RAM caching. Over the first 50 epochs, training with disk caching took about 5 extra minutes compared to RAM (roughly an 8% slowdown).

Benchmark setup (disk vs RAM caching, first 50 epochs):

  • Model: zoo:cv/detection/yolov5-n/pytorch/ultralytics/coco/base-none
  • Data: VOC.yaml
  • Use Case: cv-detection
  • 1 NVIDIA RTX A4000 GPU

For the sake of time, I only looked at the first 50 epochs, but we could consider a longer run for additional benchmarking.
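
For reference, here is roughly how one of these runs could be launched. This is a minimal sketch assuming the upstream YOLOv5-style train.run() entry point; the exact training_aware recipe and other hyperparameters aren't shown in this PR, so treat the arguments as illustrative:

```python
# Illustrative sketch of the disk-caching benchmark run, assuming the
# YOLOv5-style train.run() entry point; only the settings listed in this
# PR description are shown, everything else is left at defaults.
import train  # YOLOv5's train.py

train.run(
    weights="zoo:cv/detection/yolov5-n/pytorch/ultralytics/coco/base-none",
    data="VOC.yaml",
    epochs=50,
    cache="disk",  # the new default after this PR
)
```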

To go back to RAM caching, you can provide the cache kwarg and set it to ram.
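
For example (a sketch under the same assumptions as above, passing the YOLOv5-style cache argument explicitly):

```python
# Opting back into RAM caching by setting the cache kwarg; same
# illustrative entry point and arguments as the sketch above.
import train  # YOLOv5's train.py

train.run(
    weights="zoo:cv/detection/yolov5-n/pytorch/ultralytics/coco/base-none",
    data="VOC.yaml",
    cache="ram",  # previous default; generally faster, but the cached dataset must fit in memory
)
```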

@dsikka marked this pull request as draft on Jul 11, 2023 at 15:56
@bfineran (Member) left a comment

Could you give a rough estimate of performance with disk vs RAM for a local run? LGTM after that.

@dsikka marked this pull request as ready for review on Jul 11, 2023 at 16:00
@dsikka closed this on Jul 11, 2023
@dsikka reopened this on Jul 11, 2023
@dsikka merged commit 9c7d037 into main on Jul 12, 2023
6 checks passed
@dsikka deleted the update_default_cache branch on Jul 12, 2023 at 15:33