If I reduce the image size, will the model become smaller? #2416

Closed
leeyunhome opened this issue Mar 10, 2021 · 9 comments
Labels: question (Further information is requested)

Comments

@leeyunhome

❔Question

Hello, @glenn-jocher

!python train.py --img 416 --batch 3 --epochs 500
If I reduce the image size, will the model become smaller?

The model file seemed smaller when I experimented, but I'm not sure, so I'm asking.

Thank you.

@glenn-jocher
Member

@leeyunhome no, the model size is not a function of --img-size, they are independent.
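
For reference, here is a minimal sketch (assuming the standard torch.hub entry point for this repo and internet access to download the weights) showing that the parameter count is fixed by the architecture, regardless of the image size you train or infer at:

import torch

# Load a pretrained YOLOv5s checkpoint via torch.hub.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# The parameter count depends only on the architecture; it is the same
# number whether you pass --img 416 or --img 640 to train.py or detect.py.
n_params = sum(p.numel() for p in model.parameters())
print(f'yolov5s parameters: {n_params:,}')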

@leeyunhome
Author

Hello,

When the .pt file is smaller, I have experienced faster inference speed.
And when the image size is smaller, the .pt file also seems to be smaller:

[screenshot of trained .pt file sizes; the unit is MB]

Thank you.

@glenn-jocher
Member

@leeyunhome reduced image sizes will lead to reduced inference time. Model size is constant and not a function of any inference parameters.
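
As a rough illustration (a sketch assuming the torch.hub interface and a hypothetical local test image; timings are indicative only), the same weights can be timed at two inference sizes:

import time
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
img = 'test.jpg'  # hypothetical local test image

for size in (320, 640):
    t0 = time.time()
    for _ in range(10):
        model(img, size=size)  # the hub model accepts a per-call inference size
    print(f'size {size}: {(time.time() - t0) / 10 * 1000:.1f} ms per image')

The .pt file on disk is identical in both runs; only the per-image latency changes.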

@leeyunhome
Author

Hello,

In addition to resizing the image, what else affects the inference speed?
Is it just the use of specialized engines like TensorRT?

Thank you.

@glenn-jocher
Member

@leeyunhome using smaller models, smaller images, and batched inference will all speed up your inference time per image. Exporting to TensorRT should also speed up inference.
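
To see what "smaller models" means here, a quick sketch (again assuming the torch.hub entry point) compares the parameter counts of two checkpoints; fewer parameters generally means faster inference per image:

import torch

for name in ('yolov5s', 'yolov5m'):
    m = torch.hub.load('ultralytics/yolov5', name, pretrained=True)
    n_params = sum(p.numel() for p in m.parameters())
    print(f'{name}: {n_params / 1e6:.1f}M parameters')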

@leeyunhome
Author

@glenn-jocher
The smallest model is yolov5s.
Does batched inference mean bundling multiple images and processing them together in one batch?

@glenn-jocher
Member

@leeyunhome yes. If you use a batch size of 16, for example, then you should see faster per-image speeds than running batch size 1 in a for loop over each image.
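
A rough way to check this yourself (a sketch assuming the torch.hub interface; the image paths are placeholders) is to compare one batched call against a per-image loop:

import time
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
imgs = [f'frame_{i}.jpg' for i in range(16)]  # hypothetical local image paths

# One call with all 16 images batched together.
t0 = time.time()
model(imgs)
print(f'batch 16: {(time.time() - t0) / len(imgs) * 1000:.1f} ms per image')

# Batch size 1 in a for loop, one call per image.
t0 = time.time()
for im in imgs:
    model(im)
print(f'loop:     {(time.time() - t0) / len(imgs) * 1000:.1f} ms per image')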

@leeyunhome
Author

@glenn-jocher

When training, I set the batch size to 16.
The batch size passed as an argument affects the training speed, right?

When running inference, do I have to set the batch size separately?

@glenn-jocher
Member

@leeyunhome training results are robust to variations in batch size (#2377). detect.py runs at --batch 1 for the most part. You can explore batch-size speedups in test.py:

python test.py --data coco128.yaml --weights yolov5s.pt --batch 1
python test.py --data coco128.yaml --weights yolov5s.pt --batch 16
python test.py --data coco128.yaml --weights yolov5s.pt --batch 64
