
Add File Size (MB) column to benchmarks #8359

Merged · 3 commits merged into master from update/benchmarks on Jun 27, 2022

Conversation

glenn-jocher (Member) commented on Jun 27, 2022

Adds a file size column to benchmarks output, i.e.:

Benchmarks complete (97.27s)
                   Format  Size (MB)  mAP@0.5:0.95  Inference time (ms)
0                 PyTorch        3.9        0.2927                50.13
1             TorchScript        7.4        0.2927                57.51
2                    ONNX        7.2        0.2927                34.06
3                OpenVINO        7.5        0.2927                17.43
4                TensorRT        NaN           NaN                  NaN
5                  CoreML        NaN           NaN                  NaN
6   TensorFlow SavedModel        7.3        0.2927                64.48
7     TensorFlow GraphDef        7.3        0.2927                68.57
8         TensorFlow Lite        3.7        0.2933                18.20
9     TensorFlow Edge TPU        NaN           NaN                  NaN
10          TensorFlow.js        NaN           NaN                  NaN

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Enhances the benchmark output with a model file size metric.

📊 Key Changes

  • Added a file_size() function call to include the model's file size in the benchmark results (see the sketch after this list).
  • Adjusted the benchmark result data structure to carry the model file size.
  • Updated the DataFrame columns to reflect the new structure with the model file size information.
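
For context, here is a minimal sketch of the pattern these changes describe. The file_size() helper below is modeled on the one in YOLOv5's utils/general.py; the results list, variable names, and loop body are illustrative placeholders rather than the exact benchmarks.py code:

```python
from pathlib import Path

import pandas as pd


def file_size(path):
    """Return file/directory size in MB (0.0 if the path does not exist)."""
    mb = 1 << 20  # bytes per MiB
    path = Path(path)
    if path.is_file():
        return path.stat().st_size / mb
    if path.is_dir():
        return sum(f.stat().st_size for f in path.rglob('*') if f.is_file()) / mb
    return 0.0


# Hypothetical per-format results: (format name, exported weights path, mAP@0.5:0.95, inference ms)
results = [
    ('PyTorch', 'yolov5n.pt', 0.2927, 50.13),
    ('ONNX', 'yolov5n.onnx', 0.2927, 34.06),
]

# Each row now carries the exported model's size alongside accuracy and speed
y = [[name, round(file_size(w), 1), round(m, 4), round(t, 2)] for name, w, m, t in results]
py = pd.DataFrame(y, columns=['Format', 'Size (MB)', 'mAP@0.5:0.95', 'Inference time (ms)'])
print(py)
```

Formats whose export fails or is unsupported on the host simply contribute NaN entries, as in the TensorRT and CoreML rows of the output above.
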

🎯 Purpose & Impact

  • Purpose: Provide users with more comprehensive information on model performance by reporting model size alongside accuracy (mAP) and inference time.
  • Impact: Users can now weigh model size when evaluating performance, which is particularly important when deploying models in resource-constrained environments. 📉📲

@glenn-jocher glenn-jocher self-assigned this Jun 27, 2022
@glenn-jocher glenn-jocher merged commit 34df503 into master Jun 27, 2022
@glenn-jocher glenn-jocher deleted the update/benchmarks branch June 27, 2022 15:46
ctjanuhowski pushed a commit to ctjanuhowski/yolov5 that referenced this pull request Sep 8, 2022
* Add filesize to benchmarks.py

* Add filesize to benchmarks.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>