If I reduce the image size, will the model become smaller? #2416
Comments
@leeyunhome no, the model size is not a function of `--img-size`; they are independent.
Hello, when the .pt file is smaller (in MB), I have experienced faster inference speed. Thank you.
@leeyunhome reduced image sizes will lead to reduced inference time. Model size is constant and not a function of any inference parameters.
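To illustrate why model size is independent of `--img-size`: a convolution's parameter count depends only on its kernel size and channel counts, not on the input resolution; only the feature-map size changes. A minimal pure-Python sketch (hypothetical helper functions, not YOLOv5 code):

```python
# Hypothetical sketch (not YOLOv5 code): a conv layer's parameter count
# depends only on kernel size and channels, never on image resolution.

def conv_params(in_ch: int, out_ch: int, k: int) -> int:
    """Weights + biases of a single k x k convolution."""
    return out_ch * (in_ch * k * k + 1)

def feature_map_size(img: int, k: int, stride: int = 1, pad: int = 0) -> int:
    """Spatial size of the output feature map for a square input."""
    return (img + 2 * pad - k) // stride + 1

for img in (640, 416, 320):
    p = conv_params(in_ch=3, out_ch=64, k=3)
    f = feature_map_size(img, k=3, pad=1)
    print(f"img={img}: params={p}, feature map={f}x{f}")
```

The parameter count (and hence the saved .pt size) is the same for every input resolution; only the intermediate activations shrink, which is what reduces inference time.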
Hello, in addition to resizing the image, what affects the inference speed? Thank you.
@leeyunhome using smaller models, smaller images, and batched inference will all speed up your inference time per image. Exporting to TensorRT should also speed up inference.
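The batched-inference point can be sketched with a stand-in "model" (a single linear layer, not the YOLOv5 API): running one call over a batch produces the same results as a per-image loop while amortizing per-call overhead.

```python
# Hypothetical sketch (not the YOLOv5 API): batching replaces many small
# per-image calls with one call over the whole batch.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1000, 10))        # stand-in "model": one linear layer
images = rng.standard_normal((16, 1000))   # a batch of 16 flattened "images"

# Batch size 1 in a for loop: one call per image.
looped = np.stack([img @ W for img in images])

# Batch size 16: a single call over the whole batch.
batched = images @ W

# Identical outputs either way; the batched call is typically faster per image.
assert np.allclose(looped, batched)
```

The same idea applies to a real detector: feeding a tensor of N images through the model once is usually faster per image than N separate forward passes.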
@glenn-jocher When training, I set the batch size to 16. When inferring, do I have to set the batch size separately?
@leeyunhome yes. If you use a batch size of 16, for example, then you should see faster speeds per image than running batch size 1 in a for loop for each image.
@leeyunhome training results are robust to variations in batch size (#2377). detect.py runs at --batch 1 for the most part. You can explore batch size speedups in test.py:
python test.py --data coco128.yaml --weights yolov5s.pt --batch 1
python test.py --data coco128.yaml --weights yolov5s.pt --batch 16
python test.py --data coco128.yaml --weights yolov5s.pt --batch 64
❔Question
Hello, @glenn-jocher
!python train.py --img 416 --batch 3 --epochs 500
If I reduce the image size
Will the model become smaller?
The model seemed smaller when I experimented, but I am not sure exactly, so I am asking.
Thank you.