
Add GPU video encoder, rankers and segmenters #169

Closed · 3 of 5 tasks
jakobkruse1 opened this issue Sep 8, 2021 · 7 comments

jakobkruse1 (Contributor) commented Sep 8, 2021

Some executors are still missing GPU support. This issue tracks finishing GPU support for the remaining executors; a minimal sketch of the usual pattern follows the list below.
If you find more executors that are missing GPU support, add them to the list.

Video Encoders:

  • VideoTorchEncoder

Segmenters:

  • TorchObjectDetectionSegmenter
  • VADSpeechSegmenter (not possible)
  • YoloV5Segmenter (postponed)

Rankers:

  • DPRReaderRanker (in review)
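
For context, the GPU-enabled executors generally expose the device as a constructor argument and move the model and inputs onto it. This is only a minimal sketch of that pattern, the class name, the stand-in model, and the blob/embedding handling are illustrative rather than the actual executor code:

```python
import torch
from jina import Executor, requests


class GPUReadyEncoder(Executor):  # hypothetical name, for illustration only
    def __init__(self, device: str = 'cpu', **kwargs):
        super().__init__(**kwargs)
        if device.startswith('cuda') and not torch.cuda.is_available():
            device = 'cpu'  # fall back gracefully when no GPU is present
        self.device = torch.device(device)
        self.model = torch.nn.Linear(8, 8)  # stand-in for the real pretrained model
        self.model.to(self.device).eval()

    @requests
    def encode(self, docs, **kwargs):
        with torch.no_grad():
            for doc in docs:
                data = torch.tensor(doc.blob, dtype=torch.float32, device=self.device)
                doc.embedding = self.model(data).cpu().numpy()
```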
jakobkruse1 changed the title from "Add GPU rankers and segmenters" to "Add GPU video encoder, rankers and segmenters" Sep 8, 2021
jacobowitz self-assigned this Sep 10, 2021
jacobowitz (Contributor) commented

I don't think YoloV5Segmenter is actually missing GPU support. It can already run on GPU, but there is no distinction between a CPU and a GPU image. We could add one by forcing the CPU version of PyTorch, but I don't think that's a good idea, since PyTorch is only an indirect dependency added by YoloV5. As far as I can see, YoloV5 does not support a CPU-only install.

tadejsv (Contributor) commented Sep 10, 2021

Why does it not support CPU only? Can you not do model.to('cpu')?

jacobowitz (Contributor) commented

Sure, I mean dependency-wise. There is no yolov5[cpu] extra or anything like it; yolov5 always pulls in the full torch build with GPU support. So if we want to split into CPU/GPU versions, we would need to override PyTorch in our own requirements. We can certainly do so, but the downside is that we would then also have to change our requirements whenever the upstream requirements change, which is a bit annoying. A sketch of such an override is below.
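
For reference, such an override would look roughly like this in a CPU-only requirements file; the version pins below are placeholders, not the hub's actual pins:

```
# requirements-cpu.txt (sketch; versions are illustrative)
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.9.0+cpu
torchvision==0.10.0+cpu
yolov5
```

The `+cpu` wheels come from PyTorch's own wheel index, which is why the extra `--find-links` line is needed; pip then satisfies yolov5's torch requirement with the CPU-only build.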

tadejsv (Contributor) commented Sep 10, 2021

Since we pin our requirements to exact versions, we won't have this problem unless we change our requirements intentionally, so I think this is manageable. I would still go ahead and create a CPU-only version.

jacobowitz (Contributor) commented

VADSpeechSegmenter actually cannot run on GPU. I tried to add support, and inference fails with this message:

RuntimeError: Could not run 'quantized::linear_dynamic' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'quantized::linear_dynamic' is only available for these backends: [CPU, BackendSelect, Named, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, Tracer, Autocast, Batched, VmapMode]

Also there is pytorch/pytorch#42288
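
For anyone hitting this later, here is a minimal repro of the same class of failure, assuming the underlying model uses dynamically quantized linear layers (which the `quantized::linear_dynamic` error suggests); it needs a CUDA-capable machine to trigger:

```python
import torch
import torch.nn as nn

# Dynamically quantized ops only ship CPU kernels, so dispatching them
# with CUDA tensors raises the error quoted above.
model = nn.Sequential(nn.Linear(16, 16))
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 16)
print(qmodel(x).shape)  # fine on CPU

if torch.cuda.is_available():
    qmodel(x.to('cuda'))  # RuntimeError: Could not run 'quantized::linear_dynamic' with arguments from the 'CUDA' backend
```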

jacobowitz (Contributor) commented

DPRReaderRanker: #198
TorchObjectDetectionSegmenter: #199
VideoTorchEncoder: #180

jacobowitz (Contributor) commented

I suggest removing the YoloV5Segmenter from the scope of this ticket and closing it once the related PRs are merged.
For yolov5 I've created a separate issue: #200
