additions and tests for torchscript inference (#5161) #5215

Closed
wants to merge 18 commits

Conversation

@andreiionutdamian (Contributor) commented Oct 16, 2021

Support for TorchScript inference added in detect.py. export.py also received a minor modification: TorchScript files are name-encoded with the device, batch size, stride and resolution used at export time, so that detect.py can restore and enforce the same input configuration.
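For illustration, the name-encoding idea could look roughly like this (a hypothetical sketch with a stand-in model; the filename format and variable names are assumptions, not the PR's actual code):

import torch

class Dummy(torch.nn.Module):  # stand-in for the trained model
    def forward(self, x):
        return x * 2

im = torch.zeros(1, 3, 640, 640)  # example input: batch 1, 640x640
device, stride = 'cpu', 32
b, _, h, w = im.shape
f = f'yolov5s.{device}.b{b}.s{stride}.{h}x{w}.torchscript.pt'  # assumed encoding scheme
torch.jit.trace(Dummy(), im).save(f)  # detect.py could later parse these fields back out of the name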

πŸ› οΈ PR Summary

Made with ❀️ by Ultralytics Actions

🌟 Summary

Refinement of TorchScript model loading and export processes in YOLOv5.

πŸ“Š Key Changes

  • πŸ“ Added support for saving and loading additional configuration data with TorchScript models.
  • πŸ”§ Modified model loading to utilize preset parameters if available, impacting image size and detection behavior.
  • πŸ›  Enabled image size adjustment based on TorchScript model preferences during detection.
  • βœ”οΈ Added checks to avoid unnecessary data preprocessing if JIT model is used.
  • ♻️ Default device is now set to automatically utilize a GPU if available, without explicitly setting in argument.

🎯 Purpose & Impact

  • Enhanced Usability: By saving model configuration data (like input size) with TorchScript models, end-users get a smoother experience without needing to remember or specify extra parameters.
  • Improved Performance: The image size is automatically adjusted to the model's export-time default, which can improve both speed and accuracy.
  • Simplified Workflow: Autodetection of GPU availability streamlines the process for users, removing the need for manual device specification and potentially improving accessibility for non-expert users.
  • Overall Usability & Flexibility: These changes lead to a more intuitive and potentially faster workflow for both loading and running inference with TorchScript models in the YOLOv5 ecosystem.

@glenn-jocher (Member)

@andreiionutdamian thanks for the PR!

This looks interesting, but I'm not sure embedding metadata in the filename is best practice. Is there a way to embed this metadata within the model itself?

@glenn-jocher (Member)

@dependabot rebase

@glenn-jocher (Member)

/rebase experiment after #5251 GH actions bump to 1.5

@glenn-jocher (Member)

/rebase experiment after #5255 adding Organization secret ACTIONS_TOKEN

@andreiionutdamian (Contributor, Author)

> @andreiionutdamian thanks for the PR!
>
> This looks interesting, but I'm not sure embedding metadata in the filename is best practice. Is there a way to embed this metadata within the model itself?

True that. I added a small JSON file, ...torchscript.txt, beside ...torchscript.pt.

@glenn-jocher (Member) commented Oct 25, 2021

@andreiionutdamian in the past we've seen user error when loading weights files with associated configuration files due to incorrect pairings, i.e. darknet YOLO, which required separate weights and cfg files to load a model. Users invariably pair incorrect weights and cfg files and then raise bug reports on the resulting errors.

I think if we want to store additional metadata that we cannot incorporate directly into the model, then we should save a dictionary with model and metadata fields, similar to how we handle PyTorch model saving now:

yolov5/train.py, lines 371 to 382 @ 692be75:

# Save model
if (not nosave) or (final_epoch and not evolve):  # if save
    ckpt = {'epoch': epoch,
            'best_fitness': best_fitness,
            'model': deepcopy(de_parallel(model)).half(),
            'ema': deepcopy(ema.ema).half(),
            'updates': ema.updates,
            'optimizer': optimizer.state_dict(),
            'wandb_id': loggers.wandb.wandb_run.id if loggers.wandb else None}

    # Save last, best and delete
    torch.save(ckpt, last)
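A checkpoint saved this way is restored by reading back the same dictionary keys (a minimal sketch, assuming the layout above):

import torch

ckpt = torch.load('last.pt', map_location='cpu')  # the dict saved by train.py; unpickling needs the YOLOv5 model code on the path
model = ckpt['model'].float()  # weights are stored in half precision; cast back for inference
model.eval()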

I searched around online and also found the _extra_files argument of TorchScript save/load that might be used for the same purpose.
https://stackoverflow.com/questions/56418938/what-is-the-right-usage-of-extra-files-arg-in-torch-jit-save
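Per that thread, _extra_files takes a dict mapping archive-member names to contents, which torch.jit.save bundles into the TorchScript file itself (a minimal, self-contained sketch; the metadata keys are assumptions):

import json
import torch

class Dummy(torch.nn.Module):  # stand-in for the exported model
    def forward(self, x):
        return x * 2

cfg = {'stride': 32, 'imgsz': [640, 640]}  # assumed metadata keys
ts = torch.jit.trace(Dummy(), torch.zeros(1, 3, 640, 640))
torch.jit.save(ts, 'yolov5s.torchscript.pt', _extra_files={'config.txt': json.dumps(cfg)})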

@andreiionutdamian
Copy link
Contributor Author

> @andreiionutdamian in the past we've seen user error when loading weights files with associated configuration files due to incorrect pairings, i.e. darknet YOLO, which required separate weights and cfg files to load a model. [...]
>
> I searched around online and also found the _extra_files argument of TorchScript save/load that might be used for the same purpose. https://stackoverflow.com/questions/56418938/what-is-the-right-usage-of-extra-files-arg-in-torch-jit-save

Hi @glenn-jocher, you are perfectly right: the fewer dependencies the better, so now it is all stored within the JIT graph file itself.
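Reading the bundled metadata back out follows the same pattern on the load side (a minimal sketch; assumes the model was saved with a 'config.txt' extra file as above):

import json
import torch

extra_files = {'config.txt': ''}  # keys to extract; torch.jit.load fills in the values
model = torch.jit.load('yolov5s.torchscript.pt', _extra_files=extra_files)
cfg = json.loads(extra_files['config.txt'])  # e.g. {'stride': 32, 'imgsz': [640, 640]}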

@glenn-jocher (Member)

@andreiionutdamian thanks for the updates! The batch size variable is unused in detect.py; should we omit it from the config variables?

@andreiionutdamian (Contributor, Author)

> @andreiionutdamian thanks for the updates! The batch size variable is unused in detect.py; should we omit it from the config variables?

@glenn-jocher frankly, I would leave it there so you can use it if need be (in case a graph traced with batch size 1 doesn't allow a forward pass with a bigger batch size). In my tests I did not have this problem; everything is restricted to H and W, as it should be. But I guess torch.jit is still in its early phases, so who knows...
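That batch-size flexibility is easy to check directly: for a typical fully convolutional graph, tracing records the tensor ops rather than hardcoding the batch dimension (a minimal sketch with a stand-in model, not the YOLOv5 network):

import torch

class Dummy(torch.nn.Module):  # stand-in fully convolutional model
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

ts = torch.jit.trace(Dummy(), torch.zeros(1, 3, 640, 640))  # traced at batch size 1
out = ts(torch.zeros(4, 3, 640, 640))  # a larger batch still runs; only H and W are fixed by preprocessing
print(out.shape)  # torch.Size([4, 8, 640, 640])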

@andreiionutdamian (Contributor, Author)

> @andreiionutdamian thanks for the updates! The batch size variable is unused in detect.py; should we omit it from the config variables?

@glenn-jocher I eliminated batch size from the config variable loading.

@glenn-jocher (Member)

@andreiionutdamian thanks for the updates! Can you please merge master by running the following commands? The /rebase action appears not to work.

git remote add upstream https://github.com/ultralytics/yolov5.git
git fetch upstream
git checkout feature  # <----- replace 'feature' with local branch name
git merge upstream/master
git push -u origin -f

@glenn-jocher (Member) commented Nov 7, 2021

@andreiionutdamian since I've been unable to rebase this PR, I've integrated your TorchScript updates into the upcoming DetectMultiBackend PR #5549.

Closing this PR, using #5549 as replacement.

@andreiionutdamian thank you for your contributions to YOLOv5 πŸš€ and Vision AI ⭐

@andreiionutdamian (Contributor, Author)

> @andreiionutdamian since I've been unable to rebase this PR, I've integrated your TorchScript updates into the upcoming DetectMultiBackend PR #5549.
>
> Closing this PR, using #5549 as replacement.
>
> @andreiionutdamian thank you for your contributions to YOLOv5 πŸš€ and Vision AI ⭐

@glenn-jocher thank you, and I hope to continue contributing.
