
Fix inconsistent benchmarking throughput/time #221

Merged (4 commits) on Apr 12, 2022

Conversation

ashwinvaidya17 (Collaborator)
Description

Changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist

  • My code follows the pre-commit style and check guidelines of this project.
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing tests pass locally with my changes

@ashwinvaidya17 ashwinvaidya17 added the Bug Something isn't working label Apr 11, 2022
@@ -218,6 +256,8 @@ def sweep(run_config: Union[DictConfig, ListConfig], device: int = 0, seed: int

# Run benchmarking for current config
model_metrics = get_single_model_metrics(model_config=model_config, openvino_metrics=convert_openvino)
print(model_config.model.name, model_config.dataset.category)
Contributor:
What if we want to run benchmark.py for custom datasets that don't have a category field?
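One way to address this comment is to fall back to a default label when the config has no category. The sketch below is a hypothetical illustration, not project code: plain dicts stand in for the OmegaConf config used by benchmark.py, and `describe_run` and the `"custom"` fallback are assumed names.

```python
# Hypothetical sketch: guard against configs that lack `dataset.category`
# (e.g. custom datasets). Plain dicts stand in for the real config object.
def describe_run(model_config: dict) -> str:
    # Fall back to a generic label when the dataset has no category field.
    category = model_config["dataset"].get("category", "custom")
    return f"{model_config['model']['name']} {category}"

# A custom dataset without a category field no longer raises an error:
print(describe_run({"model": {"name": "padim"}, "dataset": {"name": "my_data"}}))
```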

Comment on lines 51 to 53
logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)
logging.getLogger("torchmetrics").setLevel(logging.ERROR)
logging.getLogger("os").setLevel(logging.ERROR)
Contributor:
Maybe we could set the level in a for loop to avoid the duplicated lines.

@samet-akcay (Contributor) left a comment:
Thanks, just minor stuff

@djdameln (Contributor) left a comment:
This works, but there will still be an inconsistency in the throughput numbers between a normal training run and the benchmarking script. To avoid confusion, you could also add the batch size to the output in the timer callback, something like `Throughput (batch_size=16): 12 FPS`.
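The suggested message could be produced by a small formatting helper in the timer callback. This is a hypothetical sketch: `format_throughput` and its parameters (image count, elapsed time, batch size) are assumed names, not the callback's actual internals.

```python
# Hypothetical sketch of the suggested throughput message. The real
# TimerCallback internals are assumptions here.
def format_throughput(num_images: int, elapsed_seconds: float, batch_size: int) -> str:
    fps = num_images / elapsed_seconds
    # Include the batch size so benchmark and training numbers are comparable.
    return f"Throughput (batch_size={batch_size}): {fps:.0f} FPS"

print(format_throughput(192, 16.0, 16))  # -> Throughput (batch_size=16): 12 FPS
```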

@samet-akcay merged commit 487ff45 into development on Apr 12, 2022
@samet-akcay deleted the fix/av/throughput_csv_209 branch on April 12, 2022 at 12:24
Labels
Bug Something isn't working
Successfully merging this pull request may close these issues.

Error while running benchmark.py on GPU