When running benchmark.py, the throughput values printed in the terminal during execution do not match the inference throughput saved in the result file <model_name>_.csv (e.g. patchcore_gpu.csv or patchcore_cpu.csv).
The values printed in the terminal seem to be correct. The values saved to the file, however, differ from them quite significantly, especially when running testing/inference on GPUs.
My setup is a machine with 4 NVIDIA A100-SXM4-40GB GPUs.
@brm738 thanks for noticing this discrepancy! The models use a callback to calculate inference time and throughput; that is what gets printed in the terminal. The benchmarking script computes the time taken without using the callback. Also, the models report batch throughput, while the benchmarking script reports throughput on sequential images, because the torch and OpenVINO inferencers are currently used on streaming/individual images. I'll update the benchmarking script to remove this ambiguity.
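To illustrate why the two numbers diverge, here is a minimal sketch (not the actual anomalib code) of the two measurement strategies described above. The `infer_fn` parameter is a hypothetical stand-in for a model's inference call; batched calls amortize per-call overhead (kernel launches, host/device transfers), so batch throughput is typically higher than sequential throughput on GPUs.

```python
import time

def batch_throughput(infer_fn, images, batch_size):
    """Images processed in batches -- analogous to what the
    model callback reports in the terminal."""
    start = time.perf_counter()
    for i in range(0, len(images), batch_size):
        infer_fn(images[i:i + batch_size])
    elapsed = time.perf_counter() - start
    return len(images) / elapsed  # images per second

def sequential_throughput(infer_fn, images):
    """Images streamed one at a time -- analogous to what the
    benchmarking script writes to the CSV file."""
    start = time.perf_counter()
    for image in images:
        infer_fn([image])
    elapsed = time.perf_counter() - start
    return len(images) / elapsed  # images per second
```

With a fixed per-call overhead, the batched variant reports a noticeably higher throughput for the same images, which matches the discrepancy seen between the terminal output and the CSV file.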