Unit Tests are Stochastic #459

Open
biosafetylvl5 opened this issue Mar 22, 2024 · 3 comments · May be fixed by #754
Labels
cli (Command line interface tasks) · Priority Medium (Medium priority issues) · testing (Test datasets, scripts, and processes)

Comments

@biosafetylvl5
Collaborator

biosafetylvl5 commented Mar 22, 2024

Unit tests introduced in #444 are stochastic:

    for script_path in script_paths:
        do_geoips_run = rand() < 0.15
        test_data_found = False
        if do_geoips_run:
            # This script has been randomly selected. Check its contents
            # to make sure that the test data for the script actually exists
            ...  # do testing things

This improves the run-time performance of the test suite, which is very clever!

However, it is best practice for unit tests to be repeatable, which this approach undermines.

I suggest using pytest marks instead, allowing testers to choose between the fast-but-random test suites and the slow-but-comprehensive ones. With marks, users could run pytest -m slow to run all tests decorated with @pytest.mark.slow.
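For illustration, here is a minimal sketch of the marks-based approach. The slow marker name, the script_has_test_data helper, and the SCRIPT_PATHS list are hypothetical placeholders, not existing GeoIPS code:

    import pytest

    def script_has_test_data(script_path):
        """Hypothetical stand-in for the real 'does test data exist' check."""
        return True

    # Placeholder script paths; a real suite would collect these from the repo.
    SCRIPT_PATHS = ["test_script_a.sh", "test_script_b.sh"]

    @pytest.mark.slow
    @pytest.mark.parametrize("script_path", SCRIPT_PATHS)
    def test_script_has_test_data(script_path):
        # Comprehensive version: checks every script, no random 15% sampling.
        assert script_has_test_data(script_path)

    # Run only the comprehensive tests:    pytest -m slow
    # Run everything except them:          pytest -m "not slow"

To avoid "unknown mark" warnings, the slow marker would also need to be registered under markers in pytest.ini or pyproject.toml.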

If this is agreed upon as a good idea, I can write out a plan for development.

@biosafetylvl5 biosafetylvl5 added testing Test datasets, scripts, and processes cli Command line interface tasks discussion required This issue requires discussion before implementation labels Mar 22, 2024
@jsolbrig
Contributor

From talking with @evrose54, it seems that most of the stochastic tests are gone. The only ones remaining are for testing geoips get plugin and geoips validate. Rather than using a stochastic test for these, I would like to trim down to a single test to start, then use pytest-cov to determine what code is not covered by that single test, and then add additional hard-coded tests to achieve better coverage.

To measure coverage and print the lines that are not covered, use pytest --cov --cov-report term-missing. If you want to run pytest on specific tests, you can specify the file that contains them so you don't have to wait as long: pytest --cov --cov-report term-missing ./tests/unit_tests/path_to_my_test_file.py.
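For concreteness, a sketch of what one such hard-coded, repeatable test could look like (the plugin name example_plugin and the exact CLI arguments are made-up placeholders, not the real GeoIPS test cases):

    import subprocess

    import pytest

    # Hypothetical hard-coded cases; the real ones would be chosen by looking at
    # what the pytest-cov report says is still uncovered.
    CLI_COMMANDS = [
        ["geoips", "get", "plugin", "example_plugin"],
        ["geoips", "validate", "example_plugin.yaml"],
    ]

    @pytest.mark.parametrize("cmd", CLI_COMMANDS, ids=[" ".join(c) for c in CLI_COMMANDS])
    def test_cli_command_runs(cmd):
        # Each case is fixed and repeatable; no random selection of scripts.
        result = subprocess.run(cmd, capture_output=True, text=True)
        assert result.returncode == 0, result.stderr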

@jsolbrig jsolbrig added Priority Low Low priority issues Priority Medium Medium priority issues labels Jun 14, 2024
@evrose54 evrose54 removed the Priority Low Low priority issues label Jun 14, 2024
@jsolbrig jsolbrig removed the discussion required This issue requires discussion before implementation label Jun 14, 2024
@biosafetylvl5
Collaborator Author

Search for rand(*) to find tests that are still stochastic.

@biosafetylvl5
Collaborator Author

biosafetylvl5 commented Aug 19, 2024

  • Refactor geoips/plugins/modules/output_checkers/image.py
    • instance 1
    • instance 2
    • instance 3
  • Refactor geoips/plugins/modules/output_checkers/netcdf.py
  • Refactor geoips/plugins/modules/output_checkers/text.py
  • Refactor tests/unit_tests/commandline/test_geoips_describe_plugin.py
    • instance 1
    • instance 2
  • Refactor tests/unit_tests/commandline/test_geoips_validate.py

@biosafetylvl5 biosafetylvl5 linked a pull request Aug 19, 2024 that will close this issue