Fix false positives for superfluous-parens #4784
Conversation
Wow, so many issues closed in one MR :) Thank you for doing the project management work too and checking all the issues with the problem!
Could you add something in the changelog and in 2.10.0's whatsnew? :)
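For readers unfamiliar with the project layout: pylint's ChangeLog entries are bullet items that reference the closed issues. A hypothetical sketch of what this PR's entry could look like (wording mine, issue numbers taken from the PR description below):

```
* Fix false positives of ``superfluous-parens`` for keywords followed by a
  walrus operator and for the if-keyword inside generators and comprehensions

  Closes #2818, #3249, #3608, #4346
```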
Not sure what is going on: …

Force-pushed from e9133ad to b828d20
This is strange, the tests aren't usually flaky. Maybe the first run was with the current master?
I guess so. However, when I copy the code of …

```python
if not (my_var := False):
    print(my_var)

print([i ** 2 for i in (range(10) if 7 > 5 else range(3))])

def a_function():
    return (x for x in ((3, 4)))  # Bad at col 25

print([i ** 2 for i in ((3, 4))])  # Bad at col 25

if not ((odd := 1)):  # Bad at col 9
    pass
```

However, the following messages raise errors in the test of …

```python
(  # Line 187
    Message("superfluous-parens", line=1, args="in"),
    "return (x for x in ((3, 4)))",
    0,
)
(Message("superfluous-parens", line=1, args="not"), "if not ((odd := 1)):", 0),  # Line 215
```

I believe those messages are tested in the test file in the 4th and 5th test. Any idea why it does work when replacing the code in the locally installed …?
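As a side note for anyone reproducing this locally: a minimal sketch (file handling and flags chosen for illustration, not taken from this PR) that runs the installed pylint on one of the cases above with only this check enabled:

```python
import subprocess
import sys
import tempfile
import textwrap

# One of the cases from the comment above: the doubled parens around the
# walrus expression should be reported once the fix is in place.
SNIPPET = textwrap.dedent(
    """\
    if not ((odd := 1)):
        pass
    """
)

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as handle:
    handle.write(SNIPPET)
    path = handle.name

# Run the locally installed pylint with everything disabled except
# superfluous-parens, so the output only shows the message under test.
result = subprocess.run(
    [sys.executable, "-m", "pylint", "--disable=all",
     "--enable=superfluous-parens", path],
    capture_output=True,
    text=True,
    check=False,
)
print(result.stdout)
```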
I don't see any problem with the unittest, but this is harder to understand than functional tests. Could you try to create a functional test in …?
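For reference, pylint's functional tests are plain Python files with expected messages annotated inline; a minimal sketch (hypothetical file name) could look like:

```python
# tests/functional/s/superfluous_parens_walrus.py (hypothetical path)
"""Functional-test sketch for superfluous-parens around the walrus operator."""

if not ((one := 1)):  # [superfluous-parens]
    pass

if not (two := 2):  # required parens: no message expected here
    print(two)
```

The `# [superfluous-parens]` annotation tells the functional-test runner which message to expect on that line; an accompanying `.txt` file records the full expected output.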
We should not remove valid use cases from the functional tests.
This fixes the false positives identified in pylint-dev#2818, pylint-dev#3249, pylint-dev#3608 & pylint-dev#4346 All false positives reported fell under keywords before walrus operator or if-keyword within generators/comprehension. This closes pylint-dev#2818, closes pylint-dev#3429, closes pylint-dev#3608, closes pylint-dev#4346
Force-pushed from 137e469 to 9a2ce9a
```python
contains_walrus_operator = False
continue
return
# The empty tuple () is always accepted.
```
I moved this comment to 416 because I believe it covers 417-418 and is misplaced in the current code.
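To illustrate why that comment matters (example mine, not from the diff): the parentheses of an empty tuple are the literal itself, not grouping parentheses, so the checker must always accept them:

```python
# The parentheses here *are* the empty tuple literal, so
# superfluous-parens must never fire on them.
for item in ():
    pass

empty = ()
```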
Force-pushed to clean up the commit history. The first commit covers splitting the functional tests for …; the second commit covers changes to the code and to the unittest. The code passes the functional tests but fails the unittests. I don't know why, so I will need some guidance to fix this before this PR can be undrafted. Once again, sorry for creating this PR before it was ready. I thought it was finished but apparently not. I'm going to fix my local development environment before doing any additional work so I have better test results before creating PRs. With 15+ tests always failing I often miss cases where 16 instead of 15 tests fail...
Don't worry, the continuous integration is here for checking on every environment and Python interpreter, and it's expected that fixing everything might take some time :)
Do you have any idea why the unit tests are failing?
I made some tests locally and pushed the result. I don't know exactly why, but …
I think the unittest is failing on …
I rarely use the unittest myself; if it's a pain to do, let's convert to functional tests :)
All tests are now passing. Please see if you think all cases are covered in the tests, especially those mentioned in #3249.
Thank you for fixing the tests! I think once we incorporate the use cases from the pinned issue we can merge :)
Co-authored-by: Pierre Sassoulas <pierre.sassoulas@gmail.com>
for more information, see https://pre-commit.ci
I have updated the tests, let me know what you think. I have removed these two tests:

```python
I = ("Version " + G) in F
assert "" + ("Version " + G) in F
```

The current checker focuses on unnecessary parens after keywords; these tests cover unnecessary parens with string concatenation. Because of

```python
if tokens[start + 1].string != "(":
    return
```

on line …, these cases never reach the check.
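A standalone sketch (stdlib only, not pylint's actual implementation) of the early-return behaviour being described: in the removed tests the keyword is not immediately followed by an opening parenthesis, so the check bails out before the inner parens are ever inspected.

```python
import io
import tokenize

# Tokenize one of the removed test lines and mimic the early return:
# only parens that directly follow a keyword would be examined.
code = 'assert "" + ("Version " + G) in F\n'
tokens = list(tokenize.generate_tokens(io.StringIO(code).readline))
for start, tok in enumerate(tokens):
    if tok.type == tokenize.NAME and tok.string in ("assert", "in"):
        if tokens[start + 1].string != "(":
            # Next token is not "(", so the checker returns here; the
            # parentheses inside the expression are never looked at.
            print(f"{tok.string!r}: next token is "
                  f"{tokens[start + 1].string!r}, skipping")
```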
Sure thing, this MR already fixes so many issues at once! What do you think of leaving the test case in the functional test without the …
Sounds good to me, but perhaps first create the issue so we can link to it in the TODO?
Amazing work, thank you! 👏
To ease the process of reviewing your PR, do make sure to complete the following boxes:
- Add a What's New entry in doc/whatsnew/<current release.rst>.

Type of Changes
Description
This fixes the false positives identified in #2818, #3249, #3608 & #4346
All false positives reported fell under keywords before the walrus operator or the if-keyword within generators and comprehensions.
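Concretely, these are the two patterns that were wrongly flagged (examples drawn from the snippets discussed earlier in this conversation):

```python
# Keyword before a walrus operator: these parentheses are required.
if not (my_var := False):
    print(my_var)

# if-keyword inside a generator/comprehension: also required.
print([i ** 2 for i in (range(10) if 7 > 5 else range(3))])
```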
This closes #2818, closes #3249, closes #3608, closes #4346
#3249 reported a lot of different false positives, many of which also involved code that is not valid Python.
I think this should fix all false positives within valid code, but before closing that issue somebody might need to run through it again to see if all reports have been taken care of.