
Add device argument to PyTorch Hub models #3104

Merged: 3 commits into ultralytics:master on May 16, 2021

Conversation

@cgerum (Contributor) commented May 10, 2021:

For my use case I would like to load the pretrained models from PyTorch Hub onto the CPU even if CUDA is available.

This pull request adds an optional device parameter to the hubconf.py functions to allow manual selection of the target device.
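
The merged implementation is not reproduced here, but the idea can be illustrated with a minimal, self-contained sketch of how an optional device argument might be threaded through a hubconf-style entrypoint. The tiny stand-in module and the name _create_sketch are hypothetical, not the repository's actual code:

import torch
import torch.nn as nn

def _create_sketch(pretrained: bool = True, device=None) -> nn.Module:
    # Hypothetical stand-in for hubconf.py's _create(): a tiny module replaces
    # the real YOLOv5 construction so the example stays self-contained.
    # 'pretrained' only mirrors the real signature and is unused here.
    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())
    if device is not None:
        # torch.device() accepts a string ('cpu', 'cuda:1') or an existing
        # torch.device, so callers can pass either form.
        model = model.to(torch.device(device))
    return model

model = _create_sketch(device='cpu')
print(next(model.parameters()).device)  # cpu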

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Added model device selection support to YOLOv5 model creation functions.

📊 Key Changes

  • Introduced an additional device parameter to the _create() function and all model creator functions (e.g., yolov5s, yolov5m, etc.).
  • The device parameter allows explicit selection of the computing device (CPU or GPU) where the model parameters will be loaded.

🎯 Purpose & Impact

  • Flexibility: Users can now specify the device where they want to load the model, improving usability for systems with multiple GPUs or special hardware configurations (a brief caller-side sketch follows this list).
  • Convenience: This change makes it easier to deploy models in different environments (e.g., servers, laptops with or without GPU) by simplifying device management in code.
  • Control: Advanced users gain finer control over resource allocation, which can improve performance and efficiency when running models.
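
From the caller's side, the change summarized above can be exercised like this (a sketch, assuming the keyword is spelled device as listed under Key Changes and that torch.hub.load forwards extra keyword arguments to the hubconf entrypoint; the device strings are illustrative):

import torch

# Load on the CPU even when CUDA is available.
model_cpu = torch.hub.load('ultralytics/yolov5', 'yolov5s', device='cpu')

# Or pin the model to a specific GPU on a multi-GPU machine.
model_gpu = torch.hub.load('ultralytics/yolov5', 'yolov5s', device='cuda:1')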

The github-actions bot left a comment:

👋 Hello @cgerum, thank you for submitting a 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:

  • ✅ Verify your PR is up-to-date with origin/master. If your PR is behind origin/master, an automatic GitHub Actions rebase may be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature' with the name of your local branch:
git remote add upstream https://github.com/ultralytics/yolov5.git
git fetch upstream
git checkout feature  # <----- replace 'feature' with local branch name
git rebase upstream/master
git push -u origin -f
  • ✅ Verify all Continuous Integration (CI) checks are passing.
  • ✅ Reduce changes to the absolute minimum required for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." -Bruce Lee

@cgerum (Contributor, Author) commented May 10, 2021:

/rebase

@glenn-jocher (Member) commented:

@cgerum thanks for the PR! Is there a downside to simply sending the model to CPU after it's created using the normal PyTorch commands like this?

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s').cpu()  # send model to CPU

@cgerum (Contributor, Author) commented May 11, 2021:

> @cgerum thanks for the PR! Is there a downside to simply sending the model to CPU after it's created using the normal PyTorch commands like this?
>
> import torch
>
> # Model
> model = torch.hub.load('ultralytics/yolov5', 'yolov5s').cpu()  # send model to CPU

This approach has two problems:

  1. We are running on a multiuser system and this approach fails when GPU:0 does not have enough memory available.
  2. When trying to use multiple networks in parallel, we need to keep GPU:0 free until the last network is loaded.

We can probably work around both problems by carefully managing our GPU resources and modifying CUDA_VISIBLE_DEVICES, but the client code would be much simpler if construction on a CPU device were allowed.
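
For instance, with a per-model device argument the scenario described above could look roughly like this (a sketch, assuming the device keyword from this PR; the assigned device strings are illustrative and would come from whatever scheduling the multiuser system uses):

import torch

# Place each network directly on its assigned device, so GPU:0 never has to
# act as a staging area while the other models are loaded.
assigned = ['cpu']
if torch.cuda.device_count() >= 2:
    assigned += ['cuda:0', 'cuda:1']

models = [torch.hub.load('ultralytics/yolov5', 'yolov5s', device=d) for d in assigned]
for d, m in zip(assigned, models):
    print(d, next(m.parameters()).device)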

@glenn-jocher glenn-jocher changed the title Allow to manual selection of device for torchhub models Add 'device' argument to PyTorch Hub models May 16, 2021
@glenn-jocher glenn-jocher changed the title Add 'device' argument to PyTorch Hub models Add device argument to PyTorch Hub models May 16, 2021
@glenn-jocher (Member) commented:

@cgerum I've updated this a bit. It's ready to merge on my end; I'm just waiting on the CI tests to run, which seem unavailable/queued today for unknown reasons.

@cgerum (Contributor, Author) commented May 16, 2021:

Thanks a lot; for me it works like a charm.

@glenn-jocher (Member) commented:

@cgerum ok, I'll go ahead and merge then if the PR works on your system. Hopefully the actions sort themselves out tomorrow. Thank you for your contributions!

@glenn-jocher glenn-jocher merged commit b133baa into ultralytics:master May 16, 2021
Lechtr pushed a commit to Lechtr/yolov5 that referenced this pull request Jul 20, 2021
* Allow to manual selection of device for torchhub models

* single line device

nested torch.device(torch.device(device)) ok

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
(cherry picked from commit b133baa)
BjarneKuehl pushed a commit to fhkiel-mlaip/yolov5 that referenced this pull request Aug 26, 2022
* Allow to manual selection of device for torchhub models

* single line device

nested torch.device(torch.device(device)) ok

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>