Add device argument for multi-backends access & Ascend NPU support #5285
Comments
I think this is an extremely useful feature to add...!

Thanks for your approval, I'll do this work and submit a PR soon.

ah ok great..

Hi @Programmer-RD-AI, sorry for the delay. I have submitted the PR #5321, review welcome! Thanks a lot :-)

Great work! Could you share what models are supported, and how they perform (in terms of speed, assuming that accuracy should be the same) compared to GPUs? Disclaimer: I'm not a maintainer of the project any more.

Sure, only the DeepLab model has been tested so far, and training is in progress. I will do more validation work on more models if this PR is accepted by the community. I'll share the performance results soon and @ you when they are posted. Thanks for your interest, and your review is welcome even though you are not a maintainer now!
🚀 Feature
Add a `--device` arg to determine the backend, and modify the hard-coded cuda references to be accelerator-agnostic. Add Ascend NPU support via the `torch-npu` adapter.

Motivation & Examples
Currently, PyTorch supports many accelerators besides NVIDIA GPUs, e.g., XLA devices (like TPUs), XPU, MPS, and Ascend NPU. Adding a `--device` argument that lets users specify the accelerator they would like to use would be helpful (a rough sketch follows). If this is acceptable to the community, I would like to do this work.
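As a rough illustration (this sketch is not from the issue itself), a `--device` flag could be threaded through a training launcher roughly like this; everything except the flag name is a hypothetical example:

```python
import argparse

import torch


def parse_args():
    parser = argparse.ArgumentParser(description="hypothetical detectron2-style launcher")
    # The proposed flag: which accelerator to run on.
    # "cuda" keeps the current default behavior.
    parser.add_argument(
        "--device",
        default="cuda",
        choices=["cuda", "npu", "xpu", "mps", "cpu"],
        help="device type to train/evaluate on",
    )
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    # torch.device accepts any device type string registered with PyTorch;
    # out-of-tree types such as "npu" require their adapter (e.g. torch-npu).
    device = torch.device(args.device)
    x = torch.zeros(2, 3, device=device)
    print(f"allocated a tensor on {x.device}")
```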
Moreover, on the basis of the `device` arg, I would like to add support for the Ascend NPU backend in detectron2.

A tiny example
The modification of the `_distributed_worker` function:
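The snippet attached to the issue is not reproduced here; below is a minimal sketch of the idea, assuming a simplified `_distributed_worker` (the real function in detectron2's `engine/launch.py` takes more parameters, e.g. a timeout):

```python
import torch
import torch.distributed as dist

# Communication backend per device type: NCCL for CUDA GPUs, HCCL for
# Ascend NPUs, Gloo as a generic fallback.
_DIST_BACKENDS = {"cuda": "nccl", "npu": "hccl"}


def _distributed_worker(
    local_rank,
    main_func,
    world_size,
    num_devices_per_machine,
    machine_rank,
    dist_url,
    args,
    device="cuda",  # the proposed new argument; the default keeps today's behavior
):
    global_rank = machine_rank * num_devices_per_machine + local_rank
    dist.init_process_group(
        backend=_DIST_BACKENDS.get(device, "gloo"),
        init_method=dist_url,
        world_size=world_size,
        rank=global_rank,
    )

    # Replace the hard-coded torch.cuda.set_device(local_rank) with a
    # dispatch on the requested device type.
    if device == "cuda":
        torch.cuda.set_device(local_rank)
    elif device == "npu":
        import torch_npu  # noqa: F401 -- registers the "npu" backend with PyTorch

        torch.npu.set_device(local_rank)

    main_func(*args)
```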
Related Info

torch.device: https://pytorch.org/docs/stable/tensor_attributes.html#torch-device
torch PrivateUse1 (registering a new backend module to PyTorch): https://pytorch.org/tutorials/advanced/privateuseone.html
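For context on the second link: once an out-of-tree backend such as `torch-npu` has been registered through the PrivateUse1 mechanism, its device strings behave like `"cuda"`. A small illustration, assuming `torch_npu` is installed on an Ascend machine:

```python
import torch
import torch_npu  # registers the "npu" device type with PyTorch

device = torch.device("npu:0")
x = torch.randn(4, 4, device=device)  # allocated on the first NPU
y = (x @ x).cpu()                     # compute on the NPU, copy back to host
print(y.shape)
```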