
Sort currently does not support bool dtype on CUDA: Regression in version 0.7.3+ #981

Closed
Chris-hughes10 opened this issue Apr 23, 2022 · 4 comments · Fixed by #983
Labels: bug / fix, help wanted

@Chris-hughes10

🐛 Bug

When I try to compute mAP on the GPU with no predictions, I receive the error: RuntimeError: Sort currently does not support bool dtype on CUDA.

This occurs in torchmetrics==0.7.3 and torchmetrics==0.8.0; the error does not occur in 0.7.2.

To Reproduce

Steps to reproduce the behavior...

Traceback (most recent call last):
  File "C:/Users/hughesc/OneDrive - Microsoft/Documents/Git/pytorch-accelerated/pytorch_accelerated/exp.py", line 24, in <module>
    print(map.compute())
  File "C:\Users\hughesc\Anaconda3\envs\accelerated-dev\lib\site-packages\torchmetrics\metric.py", line 440, in wrapped_func
    value = compute(*args, **kwargs)
  File "C:\Users\hughesc\Anaconda3\envs\accelerated-dev\lib\site-packages\torchmetrics\detection\mean_ap.py", line 765, in compute
    precisions, recalls = self._calculate(classes)
  File "C:\Users\hughesc\Anaconda3\envs\accelerated-dev\lib\site-packages\torchmetrics\detection\mean_ap.py", line 627, in _calculate
    recall, precision, scores = MeanAveragePrecision.__calculate_recall_precision_scores(
  File "C:\Users\hughesc\Anaconda3\envs\accelerated-dev\lib\site-packages\torchmetrics\detection\mean_ap.py", line 697, in __calculate_recall_precision_scores
    inds = torch.argsort(det_scores, descending=True)
RuntimeError: Sort currently does not support bool dtype on CUDA.

Code sample

import torch
from torchmetrics.detection import MeanAveragePrecision

if __name__ == '__main__':

    device = torch.device('cuda:0')

    preds = [
        dict(
            boxes=torch.tensor([], device=device),
            scores=torch.tensor([], device=device),
            labels=torch.tensor([], device=device),
        )
    ]

    targets = [
        dict(
            boxes=torch.tensor([[1.0, 2.0, 3.0, 4.0]], device=device),
            scores=torch.tensor([0.8], device=device),
            labels=torch.tensor([1], device=device),
        )
    ]

    map = MeanAveragePrecision(class_metrics=True).to(device)

    map.update(preds, targets)

    print(map.compute())

Expected behavior

I would like to be able to compute the metric on the GPU.

Environment

  • OS (e.g., Linux): Linux & Windows
  • Python & PyTorch Version (e.g., 1.0): Python 3.8 & 3.9, PyTorch 1.10
  • How you installed PyTorch (conda, pip, build command if you used source): conda
  • Any other relevant information:
@Chris-hughes10 added the bug / fix and help wanted labels Apr 23, 2022
@github-actions

Hi! Thanks for your contribution, great first issue!

@krshrimali
Contributor

krshrimali commented Apr 23, 2022

Hi @Chris-hughes10, great first issue! 🎉 I love the description, and thanks for sharing the code to reproduce the bug.

I can confirm that this is a regression from release 0.7.2. IMO, an explicit cast from torch.bool to torch.uint8 (on CUDA only) before applying torch.argsort should fix this. It looks like PyTorch doesn't support sorting boolean dtypes on CUDA devices:

// FIXME: remove this check once cub sort supports bool
TORCH_CHECK(self_dtype != ScalarType::Bool,
  "Sort currently does not support bool dtype on CUDA.");

https://github.com/pytorch/pytorch/blob/1a7e43be141ce01469d7605075cb1008bf19abd7/aten/src/ATen/native/cuda/Sort.cpp#L80
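
A minimal sketch of the cast suggested above (the helper name is mine and this is not the actual torchmetrics patch; it only illustrates casting a possibly-bool score tensor before sorting):

import torch

def argsort_descending(det_scores: torch.Tensor) -> torch.Tensor:
    # On the PyTorch versions referenced here, CUDA sort rejects bool tensors,
    # so cast to uint8 first; the ordering is unchanged (True sorts above False).
    if det_scores.is_cuda and det_scores.dtype == torch.bool:
        det_scores = det_scores.to(torch.uint8)
    return torch.argsort(det_scores, descending=True)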

To maintainers: we don't test for boolean inputs (I only see floating-point inputs), and this issue only arises when we pass empty preds.
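
For reference, a hedged sketch of the kind of regression test this suggests (the test name and tensor shapes are illustrative assumptions, not existing torchmetrics tests):

import torch
from torchmetrics.detection import MeanAveragePrecision

def test_map_with_empty_preds_on_gpu():
    # Empty predictions against a single ground-truth box: compute() should
    # finish without raising the bool-sort RuntimeError on CUDA.
    device = torch.device("cuda:0")
    preds = [dict(
        boxes=torch.zeros((0, 4), device=device),
        scores=torch.zeros(0, device=device),
        labels=torch.zeros(0, dtype=torch.long, device=device),
    )]
    targets = [dict(
        boxes=torch.tensor([[1.0, 2.0, 3.0, 4.0]], device=device),
        labels=torch.tensor([1], device=device),
    )]
    metric = MeanAveragePrecision(class_metrics=True).to(device)
    metric.update(preds, targets)
    metric.compute()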

@Chris-hughes10: In case this is urgent and can't wait until Monday, please let me know and I can create a branch with the fix. I just want to wait for everyone's opinion on this, as they might have more context. :)

cc: @Borda @SkafteNicki @stancld for their inputs

@Borda
Member

Borda commented Apr 24, 2022

@krshrimali mind sending PR with fix?

@krshrimali
Contributor

@krshrimali mind sending PR with fix?

Done :))

@Borda Borda added this to the v0.8 milestone May 5, 2022