
AveragePrecision Broken on Master #5926

Closed
peblair opened this issue Feb 11, 2021 · 2 comments · Fixed by #5939
Labels: bug, help wanted, priority: 0

peblair (Contributor) commented on Feb 11, 2021

🐛 Bug

On master (currently 4857546), the implementation of __hash__ on PyTorch Lightning metrics raises an exception for metric implementations that use non-hashable state. One such metric is AveragePrecision, which means any model that uses it is currently broken on master.
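
For illustration, a minimal sketch of the failure mode (not the Lightning source itself): __hash__ gathers the metric's state values into a tuple and hashes it, and hash() raises as soon as one of those values is a plain Python list.

```python
# Hypothetical state values: a scalar plus a list buffer, as a metric with
# list-based state would hold.
hash_vals = [0.5, []]
hash(tuple(hash_vals))  # TypeError: unhashable type: 'list'
```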

Please reproduce using the BoringModel

https://colab.research.google.com/drive/1sSMuMoLyeOEtNupTOPC7YuLQonVxBt4H?usp=sharing

To Reproduce

See the BoringModel above. All that is needed to reproduce the bug is installing master (currently 4857546) and adding an AveragePrecision metric to a model; a sketch of a standalone repro script follows the traceback. This exception is raised (copied from the Colab link above):

/usr/local/lib/python3.6/dist-packages/pytorch_lightning/metrics/metric.py in __hash__(self)
    336             hash_vals.append(getattr(self, key))
    337 
--> 338         return hash(tuple(hash_vals))
    339 
    340     def __add__(self, other: Any):

TypeError: unhashable type: 'list'
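
For reference, a minimal script along these lines should hit the same error (a sketch only; the Colab notebook above is the authoritative repro, and the import path follows the pytorch_lightning.metrics namespace shown in the traceback):

```python
import torch
from pytorch_lightning.metrics import AveragePrecision

# AveragePrecision keeps its updates in list state, so hashing the metric
# object raises the TypeError shown in the traceback above.
metric = AveragePrecision()
metric.update(torch.rand(10), torch.randint(0, 2, (10,)))
hash(metric)  # TypeError: unhashable type: 'list'
```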

Expected behavior

Not crashing 🙂

Environment

  • CUDA:
    • GPU:
      • Tesla T4
    • available: True
    • version: 10.1
  • Packages:
    • numpy: 1.19.5
    • pyTorch_debug: True
    • pyTorch_version: 1.7.0+cu101
    • pytorch-lightning: 1.2.0dev
    • tqdm: 4.41.1
  • System:
    • OS: Linux
    • architecture:
      • 64bit
    • processor: x86_64
    • python: 3.6.9
    • version: #1 SMP Thu Jul 23 08:00:38 PDT 2020

Additional context

peblair added the bug and help wanted labels on Feb 11, 2021
peblair (Contributor, Author) commented on Feb 11, 2021

Now, the "correct" solution here would be to migrate all metrics using non-hashable state to use hashable alternatives. I have an alternative solution on this branch, which circumvents the issue by allowing lists to be used as state, so long as the contents of those lists are hashable. I leave the decision whether to merge this or go down another route up to the maintainers, but let me know if you would like me to open a Pull Request with this branch's change.
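
Roughly, the idea on that branch is the following (a sketch of the approach, not the branch's exact code): when a state value is a list, hash its elements rather than the list object itself.

```python
from typing import Any, Iterable


def _hash_metric_state(state_values: Iterable[Any]) -> int:
    """Hash metric state, tolerating list-valued state whose elements are hashable."""
    hash_vals = []
    for val in state_values:
        if isinstance(val, list):
            # Only valid when the list's elements are themselves hashable.
            hash_vals.extend(val)
        else:
            hash_vals.append(val)
    return hash(tuple(hash_vals))
```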

Borda added the priority: 0 label on Feb 11, 2021
SkafteNicki (Member) commented:
@peblair thanks for catching this bug. Please feel free to open a PR with your suggested solution. Since metric states that are lists should only contain torch.Tensor, they should always be hashable.
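
(For reference, torch.Tensor hashes by object identity, so a tuple built from tensor-only list state is hashable even though the tensors hold non-hashable data; a quick check:)

```python
import torch

t = torch.tensor([1.0, 2.0])
hash((t, t))  # works: tensors hash by object identity
```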
