
Flake8 Fixes (Part 2) #1552

Merged: 14 commits into staging on Oct 22, 2021
Conversation

@laserprec (Contributor) commented Oct 19, 2021

Description

This is part 2 of the flake8 fixes. For part 1, please see #1550.

The following flake8 issues are addressed in this PR. For lines covered by our code coverage, source code fixes are applied whenever possible. For lines outside our code coverage, a flake8 exception is added as an inline comment (e.g. # noqa: <PEP 8 RULE>).

Coding Convention Issues

  • E712 comparison to True should be 'if cond is True:' or 'if cond:'
  • E711 comparison to None should be 'if cond is None:'
  • E722 do not use bare 'except' **
  • E741 ambiguous variable name 'l'
  • F403 'from X import *' used; unable to detect undefined names
  • F405 'Dense' may be undefined, or defined from star imports: tensorflow.keras.layers
  • W605 invalid escape sequence

Critical issues (may indicate potential bugs)

  • F821 undefined name 'K' **

Reviewers: please pay particular attention to the items marked with (**), as they may point to critical logical issues. A sketch of the typical fixes follows below.
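
For illustration only (not part of the original PR description), the sketch below shows the typical shape of fixes for the rules listed above; all names are hypothetical, and the final `# noqa` line mirrors the inline-exception approach described earlier.

```python
import re

flag = True
value = None

# E712: comparison to True -- rely on truthiness instead of `== True`.
if flag:                      # instead of: if flag == True:
    pass

# E711: comparison to None -- use `is None` instead of `== None`.
if value is None:
    pass

# E722: avoid a bare `except:`; catch a named exception class instead.
try:
    1 / 0
except ZeroDivisionError:
    pass

# E741: avoid the ambiguous variable name `l`.
layer_count = 3               # instead of: l = 3

# W605: `\d` in a normal string is an invalid escape sequence; use a raw string.
pattern = re.compile(r"\d+")  # instead of: re.compile("\d+")

# F403/F405: star imports hide undefined names. Where the import cannot be
# safely rewritten (e.g. lines outside code coverage), the rule is silenced inline:
# from tensorflow.keras.layers import *  # noqa: F403
```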

Checklist:

  • I have followed the contribution guidelines and code style for this project.
  • I have added tests covering my contributions.
  • I have updated the documentation accordingly.
  • This PR is being made to staging branch and not to main branch.

@laserprec laserprec changed the base branch from laserprec/flake8-fix to staging October 19, 2021 20:48
@laserprec laserprec linked an issue Oct 19, 2021 that may be closed by this pull request
@@ -221,7 +221,7 @@ def evaluation_log_hook(
     batch_size (int): Number of samples fed into the model at a time.
         Note, the batch size doesn't affect on evaluation results.
     eval_fns (iterable of functions): List of evaluation functions that have signature of
-        (true_df, prediction_df, \*\*eval_kwargs)->(float). If None, loss is calculated on true_df.
+        (true_df, prediction_df, **eval_kwargs)->(float). If None, loss is calculated on true_df.
Collaborator commented:

Isn't this going to have a different effect i.e. make the text bold? https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html?highlight=asterisk#inline-markup

@laserprec (Contributor, Author) replied:

I think this bolds the text only if it is encapsulated on both sides ( **eval_kwargs**), but I can compile the docs and confirm.
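
As an aside (not part of the original exchange): W605 fires here because `\*` is not a valid escape sequence in a regular Python string. A minimal sketch of an alternative that keeps the Sphinx escaping is a raw docstring; the function name below is hypothetical.

```python
def evaluation_log_hook_example(eval_fns=None, **eval_kwargs):
    r"""Hypothetical sketch: a raw docstring avoids W605 while keeping the escapes.

    With the ``r`` prefix, ``\*`` is no longer an invalid escape sequence,
    and Sphinx renders the escaped asterisks literally instead of treating
    them as the start of strong (bold) inline markup.

    Args:
        eval_fns (iterable of functions): Functions with the signature
            (true_df, prediction_df, \*\*eval_kwargs) -> float.
    """
    return None
```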

@anargyri (Collaborator) left a review:

See comments

@codecov-commenter commented Oct 21, 2021

Codecov Report

Merging #1552 (29ae297) into staging (b2b28c9) will increase coverage by 62.13%.
The diff coverage is 33.33%.


@@             Coverage Diff              @@
##           staging    #1552       +/-   ##
============================================
+ Coverage     0.00%   62.13%   +62.13%     
============================================
  Files           84       84               
  Lines         8436     8437        +1     
============================================
+ Hits             0     5242     +5242     
- Misses           0     3195     +3195     
| Flag | Coverage Δ |
|---|---|
| nightly | ? |
| pr-gate | 62.13% <33.33%> (?) |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
|---|---|
| recommenders/datasets/amazon_reviews.py | 83.05% <0.00%> (+83.05%) ⬆️ |
| recommenders/datasets/criteo.py | 86.79% <0.00%> (+86.79%) ⬆️ |
| recommenders/datasets/mind.py | 0.00% <0.00%> (ø) |
| ...menders/models/deeprec/models/graphrec/lightgcn.py | 51.47% <0.00%> (+51.47%) ⬆️ |
| recommenders/models/ncf/dataset.py | 92.03% <0.00%> (+92.03%) ⬆️ |
| recommenders/models/newsrec/models/base_model.py | 30.48% <0.00%> (+30.48%) ⬆️ |
| recommenders/models/newsrec/models/layers.py | 52.99% <0.00%> (+52.99%) ⬆️ |
| recommenders/models/vae/multinomial_vae.py | 0.00% <0.00%> (ø) |
| recommenders/models/vae/standard_vae.py | 0.00% <0.00%> (ø) |
| recommenders/utils/notebook_memory_management.py | 0.00% <0.00%> (ø) |

... and 29 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update b2b28c9...29ae297.

@anargyri (Collaborator) commented:

LGTM

@laserprec laserprec merged commit b4594a1 into staging Oct 22, 2021
@laserprec laserprec deleted the laserprec/flake8-fix-p2 branch October 22, 2021 16:58
Labels: None yet
Projects: None yet
Development: Successfully merging this pull request may close these issues: Flake8 Issues
3 participants