
Faster Accuracy metric #2775

Merged · 14 commits · Aug 6, 2020

Conversation

@Diuven (Contributor) commented Jul 31, 2020

What does this PR do?

Speeds up the Accuracy metric for classification tasks.

Fixes #2722
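For illustration, a minimal sketch of the kind of vectorized computation a speedup like this typically relies on; the function below is hypothetical and not the PR's actual implementation:

```python
import torch

def accuracy_vectorized(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # One element-wise comparison plus one mean, both of which run
    # as tensor kernels with no Python-level loop over classes.
    return (pred == target).float().mean()

pred = torch.tensor([0, 2, 1, 1])
target = torch.tensor([0, 1, 1, 1])
print(accuracy_vectorized(pred, target))  # tensor(0.7500)
```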

Before submitting

  • Was this discussed/approved via a GitHub issue? (not needed for typo fixes and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@mergify mergify bot requested a review from a team July 31, 2020 09:03
@Diuven Diuven mentioned this pull request Jul 31, 2020
@Borda Borda added the feature Is an improvement or enhancement label Jul 31, 2020
@Borda Borda added this to in Progress in Metrics package via automation Jul 31, 2020
@pep8speaks commented Jul 31, 2020

Hello @Diuven! Thanks for updating this PR.

Line 135:120: E501 line too long (127 > 119 characters)

Comment last updated at 2020-08-06 07:45:04 UTC

@justusschock (Member) commented:

@SkafteNicki @Borda @PyTorchLightning/core-contributors I think we now have to decide if we want to use the given num_classes or the calculated ones (especially if they don't match). Opinions?
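To make the question concrete, a hypothetical sketch of the two candidate class counts; the names and logic below are assumptions, not taken from the PR:

```python
from typing import Optional

import torch

def resolve_num_classes(pred: torch.Tensor,
                        target: torch.Tensor,
                        num_classes: Optional[int]) -> int:
    # "Calculated": inferred from the highest label actually present.
    calculated = int(max(pred.max(), target.max())) + 1
    if num_classes is None:
        return calculated
    # The open design question: when num_classes != calculated,
    # should the user's value win, the data's, or should we warn?
    return num_classes
```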

@mergify mergify bot commented Aug 2, 2020

This pull request is now in conflict... :(

@codecov codecov bot commented Aug 3, 2020

Codecov Report

Merging #2775 into master will increase coverage by 1%.
The diff coverage is 98%.

@@           Coverage Diff           @@
##           master   #2775    +/-   ##
=======================================
+ Coverage      88%     90%    +1%     
=======================================
  Files          78      78            
  Lines        7069    7086    +17     
=======================================
+ Hits         6250    6371   +121     
+ Misses        819     715   -104     

@Diuven (Contributor, Author) commented Aug 3, 2020

> @SkafteNicki @Borda @PyTorchLightning/core-contributors I think we now have to decide if we want to use the given num_classes or the calculated ones (especially if they don't match). Opinions?

Actually, I updated the code so that it behaves the same way as the previous code when the given num_classes differs from the real number of classes. So if you want to put off the decision, or want to keep the behaviour, you can do so.

I think this behaviour (using only the results in the range specified by num_classes) can be useful when the user wants the stats for certain classes.

But I do think this is quite counter-intuitive, and removing it would make the function clearer and faster. We could simply drop this behaviour for now and add it back later with a well-designed structure.

This depends on the core developers' opinions, so I'm willing to update the PR accordingly. Please share your thoughts! 😃
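A minimal sketch of the truncation behaviour described above, assuming per-class counts are simply cut off at the given num_classes (names hypothetical, not the PR's actual code):

```python
import torch

def per_class_correct(pred: torch.Tensor, target: torch.Tensor,
                      num_classes: int) -> torch.Tensor:
    # Correct predictions, counted per class in one bincount call
    # rather than a Python loop over classes.
    correct = pred[pred == target]
    counts = torch.bincount(correct, minlength=num_classes)
    # If labels >= num_classes occur, keep only the first
    # num_classes entries: the range specified by num_classes.
    return counts[:num_classes]
```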

@mergify mergify bot requested a review from a team August 3, 2020 07:36
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
@mergify mergify bot requested a review from a team August 5, 2020 22:29
@Diuven (Contributor, Author) commented Aug 6, 2020

I keep getting intermittent pip install errors in the CI tests. Can someone check whether any of my code is causing these failures?

@justusschock (Member) commented:

@Diuven This keeps happening from time to time, although we don't know why. Usually retriggering helps (which is what I did just now) :)

@justusschock (Member) left a review:

LGTM, just the last missing docstring line :)

@mergify mergify bot requested a review from a team August 6, 2020 07:40
@justusschock justusschock requested a review from Borda August 6, 2020 07:47
Metrics package automation moved this from in Progress to in Review Aug 6, 2020
@Borda Borda merged commit ac4a215 into Lightning-AI:master Aug 6, 2020
Metrics package automation moved this from in Review to Done Aug 6, 2020
@Diuven Diuven mentioned this pull request Aug 8, 2020
Labels: feature (Is an improvement or enhancement)
Projects: No open projects
Development: merging this pull request may close the issue "Slow accuracy metric" (#2722)
6 participants