
Review Code with Jason webinar (for testing) and make a list of refactor suggestions #149

Closed
Eduardo06sp opened this issue May 1, 2023 · 18 comments

@Eduardo06sp
Contributor

https://www.youtube.com/watch?v=AD5Sg-CKgww

Similar to #121. I think Jason provided very valuable feedback and some of those changes could definitely be incorporated into this project.

I am very interested, so I will keep an eye on this issue over the next 7 days. If no one takes it, I will request to be assigned to it.

I would love to give someone a chance to work on this before me. :}

@kasugaijin
Owner

Awesome, thanks @Eduardo06sp

@Eduardo06sp
Contributor Author

I'd like to be assigned to this!

So just to clarify, the first step is to create a list of things that need to be refactored. As I have time, I'll watch through the video and create the list, then go from there!

@kasugaijin
Owner

kasugaijin commented May 10, 2023

Yes, I think that's a good high-level approach. I am not sure how much work this might end up being, as it could imply more fundamental design changes that affect all or many of the test files. So, maybe it would be good to develop a couple of bullets on testing design principles that could be added to the README, and the next step could be to create a file-by-file summary of changes (as many as you feel like taking on, one or many, no pressure). Then we can create an issue for each file that outlines a summary of changes. What are your thoughts on this?

@Eduardo06sp
Contributor Author

Eduardo06sp commented May 10, 2023

I was thinking this could turn into a "Test Refactoring (Jason Swett)" project, and the issues could be grouped as one project on this page:
https://github.com/kasugaijin/baja-pet-rescue/projects?query=is%3Aopen

The same approach can be used for the other issue (#121), as it may be easier to track progress like this. I don't think just anyone can make a new project, however.

I'm not sure what I would be adding to the README. I'll definitely start by making the overview/list then let you know here when I'm done!

@kasugaijin
Owner

OK, tracking on a project page sounds good: https://github.com/users/kasugaijin/projects/4/views/1
As far as the README goes, I am just thinking there are probably some core test-writing principles we can draw from this refactoring work, and these can be included in the README for future reference. That's a separate piece from tracking the work.
Hopefully you can add to the project link above?

@kasugaijin
Owner

@Eduardo06sp the project linked above should now be available. I just realised it was private and set it to public. https://github.com/users/kasugaijin/projects/4

@Eduardo06sp
Contributor Author

I don't have the ability to add issues there.

However, it's not a major issue honestly. I will work on the list then just create new issues as needed. I didn't want to add more work than is necessary!

@Eduardo06sp
Contributor Author

Eduardo06sp commented May 18, 2023

General Points

  • Make test files based on FEATURES, not models (especially with feature/integration tests)

    • E.g. a lot of people have a "people" model, then have a test file named "people_tests", etc. etc.
  • Michael pitched in, said to keep the lines of code to the minimum possible, only what's necessary

    • E.g. some tests have an unnecessary amount of params that are unrelated to what is being tested
  • Change any very specific selectors to more general ones, e.g. change assert_select 'div.col-lg-4' to something more general

    • Michael pitched in, noticed a bunch of "div" assert_selects, so if the page layout changes, tests will immediately break (see the sketch below)
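
A hedged sketch of the selector point above (the selectors and page text are hypothetical, not taken from the actual suite):

    # Brittle: coupled to the Bootstrap grid, so any layout change breaks it.
    assert_select 'div.col-lg-4', text: 'Mike'

    # More resilient: assert on the content the user actually sees.
    assert_select 'h2', text: 'Mike'
    assert_includes response.body, 'Mike'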

test/integration/org_dogs_test.rb

  • Around line 15 "sign_in users(:user_one)"

    • Fixtures should have more descriptive labels, e.g. users(:user_one) doesn't tell you anything about what its significance is
      • if the point of the fixture is that it is an adopter user, a better name might be users(:adopter_user)
  • Around line 112 "verified staff can pause dog and pause reason is selected in dropdown"

    • Test description should clearly describe the test functionality

    • Break into 2 separate tests: "verified staff can pause dog", then separate test for "in dropdown, 'pause reason' is selected"

    • There are lots of params here, simplify / extract them

      • Check if ALL of those params are even necessary for the tests
      • They MAY be necessary, but this is mixing levels of abstraction
        • E.g. system test interacts with interface, "high-level", while you reach for the database in your tests, "low-level"
        • Keep system tests limited to what APPEARS on the page, instead of what appears in the db, e.g. assert_equal @dog.pause_reason should not be there
    • He scrolled down and it seems like more of the same issues

  • Around line 189, separate "verified staff can upload multiple images and delete one of the images" into 2 tests

    • It's okay if you have to upload the images again
    • Might be nice to add a setup method to upload images
    • Abstract irrelevant parameters away
      • E.g. dog age has nothing to do with the test
      • Alternatively, use a helper method to move all the params away from the test (if all the params are necessary)
  • Around line 271 "verified staff accessing org dogs index without selection param see all unadopted dogs"

    • Jason would do this a lot differently
      • he would (pseudo-code):
        * sign_in :verified_staff_user
        * create unadopted dog named "Mike"
        * get /dogs
        * assert_includes response.body, "Mike"
        
    • You can break this test out into its own file
      • You could add setup method where it creates the dog for the test
        • May be able to just use a fixture instead
    • Also test for a scenario where body SHOULD NOT include an unadopted dog
      • It's important to test this, to ensure the test isn't just passing no matter what (both cases appear in the sketch after this list)
  • He scrolled down, said the rest of his advice would probably be similar
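
Tying the fixture-naming and "around line 271" points together, a minimal sketch in the style Jason described (the file name, fixture labels, and dogs_path route are assumptions for illustration, not the project's actual code; sign_in assumes Devise's integration test helpers):

    # test/integration/org_dogs_index_test.rb (hypothetical file,
    # organized by feature rather than by model/route)
    require 'test_helper'

    class OrgDogsIndexTest < ActionDispatch::IntegrationTest
      include Devise::Test::IntegrationHelpers

      setup do
        # Descriptive fixture label: the name says why this user matters.
        sign_in users(:verified_staff_user)
      end

      test 'index without selection param shows unadopted dogs' do
        unadopted_dog = dogs(:unadopted_dog) # assumed fixture

        get dogs_path

        assert_includes response.body, unadopted_dog.name
      end

      test 'index without selection param hides adopted dogs' do
        # Negative case: guards against a test that passes no matter what.
        adopted_dog = dogs(:adopted_dog) # assumed fixture

        get dogs_path

        refute_includes response.body, adopted_dog.name
      end
    end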

app/controllers/successes_controller/google_maps_coordinates.rb

  • He might rename the class to be more descriptive
    • E.g. he may rename it to OverlappingCoordinatesSet

app/controllers/successes_controller.rb

  • He rewrote it to:
    def index
      @coordinates = Adoption.all.map do |adoption|
        disambiguate(
          latitude: adoption.adopter_account.adopter_profile.location.latitude,
          longitude: adoption.adopter_account.adopter_profile.location.longitude
        )
      end
    end

    def disambiguate(coordinate) # or fuzzify/whatever else
    end
    • The idea is: cycle through all coordinates, then "disambiguate" each one (see the sketch after this list)

      • If coordinate is unique, return / don't do anything
      • If coordinate is not unique, add small random number to lat and/or long, and return that
    • Alternative approaches:

      • Someone pitched in: since all locations are fake, maybe you don't have to track any of this at all, and you could just make fake locations
        • In this case, change above code to "fuzzify" method, then just add method that adds random number(s) to lat/long
      • Someone else pitched in: you can use Google Maps API to aggregate overlapping pins
        • It will display a pin with a number (indicating the number of pins that overlap at a given location)
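
A minimal sketch of the disambiguate idea above (the Set bookkeeping and the offset size are assumptions, not code from the video):

    require 'set'

    # If a coordinate has been seen before, nudge it by a tiny random
    # offset so overlapping pins remain individually visible on the map.
    def disambiguate(coordinate)
      @seen_coordinates ||= Set.new

      if @seen_coordinates.include?(coordinate)
        coordinate = {
          latitude: coordinate[:latitude] + rand(-0.0005..0.0005),
          longitude: coordinate[:longitude] + rand(-0.0005..0.0005)
        }
      end

      @seen_coordinates << coordinate
      coordinate
    end

The "fuzzify" alternative mentioned above would be the same method without the Set check, always applying the random offset.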

@Eduardo06sp
Contributor Author

Eduardo06sp commented May 18, 2023

There was A LOT to unpack from the video, but here is a revised draft of my notes. The structure of my notes essentially follows the video chronologically (except the general points at the very top).

Anywhere you see a checkbox, it indicates something that can directly be addressed. All other bullet points are things that I found important or could potentially become a change.

I found it difficult to really point to any specific part where I could say "this is what needs to change in this file", especially since some of the points are more broad & apply to the entire test suite.


So right off the bat, we could just work on the file that Jason focused on for a large part of the video, test/integration/org_dogs_test.rb.

Alternative tasks/issues could be focused on the stuff under "General Points," which will require reviewing the rest of the test files and checking what is applicable.

@kasugaijin
Owner

kasugaijin commented May 19, 2023

Awesome work, @Eduardo06sp. I will have time to review your notes above this weekend. I agree with your suggestion to start with the org dogs test file. How do you want to approach this? Perhaps refactor a test and submit a PR, then we can review together? After a couple of iterations of that, the chunks of work could get bigger if desired. I'd like to keep the overhead low if possible, so a test at a time would help me at first.

@Eduardo06sp
Contributor Author

Eduardo06sp commented May 19, 2023

@kasugaijin do you want me to just start with one of the tests, then create a PR, skipping the part where I create issues altogether?

In that case, we can use this issue as the "mega-thread" for all the work, and each PR thread can contain a more specific review/discussion as needed.

I wanted to create issues to make it easier to track what I'm assigned to & potentially have others take some if desired.


I honestly wouldn't mind chipping away at it, then just creating PRs one at a time to simplify the whole process (with no Issues created). I would check off the list I made here as the work gets done.

I definitely would love to review any changes together. The video was really informative & we could ensure nothing gets missed.

@kasugaijin
Owner

kasugaijin commented May 19, 2023

If that works for you, I would say just chip away at it and make PRs as you go. Then we can review!

Once we have a few down, and if you're feeling like spreading the load, we can make some issues then. I agree that the above can be used to track work. Perhaps we can add a new checkbox with a specific description of the work completed each time you make a PR, or at least a link to the PR, and then we can check them off as we go to keep track of things? I am happy for that to be done after the fact instead of detailing everything up front (as we all know those things will be subject to change as work progresses).

@Eduardo06sp
Contributor Author

Sounds good!

I'll start knocking them out as time allows, and just make PRs.

I'll broadly track my work accordingly. :}

@Eduardo06sp
Contributor Author

Our conversation about using more general selectors throughout the tests:
#159 (comment)

Just dropping this here to keep track of things!

@Eduardo06sp
Contributor Author

@kasugaijin Hey Ben, I apologize for the delay in communicating my next steps; I was a bit busy these past few days.

I think the individual tests in test/integration/org_dogs_test.rb are fine in their current state. The last couple of tests, starting at line 235 (https://github.com/kasugaijin/baja-pet-rescue/blob/main/test/integration/org_dogs_test.rb#L235), involve testing what the results of filtering dogs (via parameters) look like.

Jason Swett mentioned he would do those a lot differently (as noted in my long list of notes above). However, I think they are fine as is. If they got refactored, it would involve creating new Dogs of each type, then making sure they show up on the page. I think we can instead keep the technique you used, which checks the number of matching elements against the total count of dogs being displayed/tested (see the sketch below).
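
For reference, a hedged sketch of that count-based technique (the selector and the unadopted scope are illustrative assumptions):

    unadopted_dogs = Dog.where(status: 'unadopted') # assumed scope
    get dogs_path

    # One element per unadopted dog, so the two counts should match.
    assert_select 'li.dog', count: unadopted_dogs.count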

He also mentioned that we should keep system tests limited to what APPEARS on the page, instead of what appears in the db, e.g. assertions such as assert_equal @dog.pause_reason should not be there. This would include the tests towards the end (like I just mentioned). However, I did notice Jason refer to "system" tests a few times when making his point.

Since these are integration tests, it is my understanding that "reaching" into the database is okay. I certainly agree with Jason when it comes to system tests, where you are essentially mimicking how a user navigates and "sees" your website. In system tests, there should be no mixing of abstraction levels.

One thing left is ensuring the test files are named for the features being tested, instead of the general names they currently have. This would change the file structure of your integration tests a bit, so we can also leave the files as-is if you don't think that change is necessary.

Alternatively, I can start looking through the rest of the integration tests & see if anything needs some "simplification"/refactoring like I've done with test/integration/org_dogs_test.rb so far.

I think these would be the last two items for this issue, or last one if you want to disregard one of the suggestions.

@kasugaijin
Owner

Thank you @Eduardo06sp no problem at all!

I agree with your assessments here. I think that the remaining refactors you mention above, while perhaps arguably the "best" approach, don't really justify the effort and won't add much value right now. However, it's certainly a good exercise to identify what could be better, as you have done, so it is noted for next time. For example, I definitely didn't put a ton of thought into the file names, but I am OK with them remaining as-is for now. If we were to refactor this, it would likely create a number of additional test files, as I was creating test files by route rather than by feature, and a route may have several features. From here on, that is how it should be handled.

As far as the db query within an integration test goes, I agree, this is acceptable. These are not expensive queries, either, so no need to invest time to consider other approaches for these.

Thanks again for the above and the PRs. This was a good exercise and I will refer to your notes periodically.

@Eduardo06sp
Contributor Author

Eduardo06sp commented Jul 2, 2023

@kasugaijin No problem! Thank you for confirming some of the thoughts I had and sharing some of your own. So just to clarify, do you think we should close this Issue at this point?

I'm still working on my personal projects, but I am interested in contributing to this repo one way or another, whether it's this specific Issue or one of the other open ones.

I would like to know if I should start focusing my efforts on another open Issue instead, as time permits.

@kasugaijin
Owner

Yes, let's close this issue now! Though I will link to this issue in the README to serve as reference material for anyone working on tests in the future.
