
Throughput/Inference Throughput issue in benchmark.py #209

Closed

brm738 opened this issue Apr 8, 2022 · 2 comments
brm738 commented Apr 8, 2022

When running benchmark.py, the throughput values printed in the terminal during program execution do not correspond to the inference throughput saved in the result file <model_name>_.csv (e.g. patchcore_gpu.csv or patchcore_cpu.csv).
The values printed in the terminal seem to be OK. The values saved in the file, however, differ quite significantly from them, especially when running testing/inference on GPUs.
My machine has 4 Nvidia A100-SXM4-40GB GPUs.

ashwinvaidya17 (Collaborator) commented

@brm738 Thanks for noticing this discrepancy! The models use a callback to calculate inference time and throughput, and that is what gets printed in the terminal. The benchmarking script computes the time taken without using the callback. In addition, the models report batch throughput, while the benchmarking script reports throughput on sequential images, because the torch and openvino inferencers are currently used on streaming/individual images. I'll update the benchmarking script to remove this ambiguity.
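To make the distinction concrete, here is a minimal sketch of the two measurement styles, using a stand-in model and hypothetical helper names (this is not anomalib's actual callback or benchmark code). The callback-style number times only the forward passes over whole batches, while the benchmark-style number feeds images one at a time and includes all loop overhead:

```python
import time

import torch


def batch_throughput(model, batches):
    """Throughput the way a per-batch timer callback reports it:
    total images divided by time spent inside the forward passes only."""
    n_images, elapsed = 0, 0.0
    for batch in batches:
        start = time.perf_counter()
        model(batch)
        elapsed += time.perf_counter() - start
        n_images += batch.shape[0]
    return n_images / elapsed


def sequential_throughput(model, images):
    """Throughput the way a benchmarking loop over single images measures it:
    wall-clock time for the whole loop, one image per forward pass."""
    start = time.perf_counter()
    for image in images:
        model(image.unsqueeze(0))  # batch size 1, like a streaming inferencer
    return len(images) / (time.perf_counter() - start)


if __name__ == "__main__":
    model = torch.nn.Conv2d(3, 8, 3)  # stand-in for a real anomaly model
    data = torch.randn(64, 3, 224, 224)
    print("batch     :", batch_throughput(model, data.split(16)))
    print("sequential:", sequential_throughput(model, list(data)))
```

On a GPU the batched number will usually come out far higher, since it amortizes each forward pass over the batch and excludes per-image overhead, which is consistent with the gap between the terminal and CSV values reported above.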

ashwinvaidya17 (Collaborator) commented

Closing, as this is addressed in PR #221.
