measure test coverage #17
The 8 existing unit_test codes pretty much cover all of the major subprograms and functions that would be used by 99% of users, so yes, I think it's adequate, and I'll certainly let you know if/when any major functionality is added which would necessitate additional unit_test codes. While any testing scheme could likely be improved, the subprograms which aren't directly called by the unit_test codes are lower-level routines which aren't intended to be called directly by users anyway. Furthermore, most of those subprograms are at least implicitly tested whenever the unit_tests are run, since the user-callable subprograms often call such lower-level subprograms within the library, and those in turn call other deeper-down subprograms, etc., to do a lot of the underlying work. Bottom line: there are 300+ subprograms in the BUFRLIB, and I don't have the time or inclination to write a separate test case for every last one of them ;-)
Awesome. We need to measure test coverage to quantify this and identify areas for additional testing.
Attached is the latest test coverage, just run this morning...
Thanks very much for this @edwardhartnett, but is there any way you could generate this from an Intel DA build+test? Two of the test codes (test_OUT_3.f and test_OUT_4.f) require a DA build of the library and are currently set up to run only on Intel. So I'm guessing you generated this latest version from a different build+test (maybe on GNU?), because your latest graph doesn't show any coverage for some of the subprograms I just recently added new tests for in test_OUT_4.f. Better yet, if you want to wait until after the new test_OUT_5.f is merged, that should show even better coverage, and that one will run under both Intel and GNU, and for both DA and non-DA (i.e. static) builds. Out of curiosity, do you (or maybe @aerorahul?) know why we're currently only running test_OUT_3.f and test_OUT_4.f for Intel builds? Is there some issue with dynamic allocation on GNU that won't let us run those tests for that compiler? Again, those two tests do require a DA build of the library to be linked, but I would presume GNU supports dynamic allocation just as well as Intel does.
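Compiler-specific test restrictions like the one questioned above are typically implemented as a compiler-ID guard in the project's `CMakeLists.txt`. The fragment below is a hypothetical sketch of that pattern, not the bufr repo's actual build logic; the `bufr_DA` target name is an assumption. Widening the guard to match GNU as well as Intel is the usual way to enable such tests under gfortran.

```cmake
# Hypothetical sketch: register the DA tests only for supported compilers.
# Matching both Intel and GNU (rather than Intel alone) would let the
# tests run under gfortran in CI as well.
if(CMAKE_Fortran_COMPILER_ID MATCHES "^(Intel|GNU)$")
  add_executable(test_OUT_3 test_OUT_3.f)
  target_link_libraries(test_OUT_3 PRIVATE bufr_DA)  # DA (dynamic-allocation) build of the library
  add_test(NAME test_OUT_3 COMMAND test_OUT_3)
endif()
```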
Just out of sheer curiosity, I'm going to try pushing a new test branch up to GitHub to see if the test_OUT_3.f and test_OUT_4.f tests can run under GNU for the DA builds. I still don't understand why this would be a problem, and ideally I'd like all of the test programs to run whenever a new branch is pushed to the repository. If it works, this would be a big improvement over the existing test coverage.
@jbathegit These were the test programs that you provided. I had added them to the build and made them part of the GitHub CI. I had sent an email asking for assistance in turning those tests ON, because they were failing.
@jbathegit
Thanks for that reminder @aerorahul I'll take another look. |
So on the WCOSS I was able to compile and run test_OUT_3.f without any problems using GNU (gfortran and gcc) v4.8.5. I realize that's a much older version than the GNU v9+ running in the CI environment, but it's all I have on the WCOSS. I'll go ahead and try pushing it up and see what happens...
I believe this has all been resolved, and @jbathegit has a recent set of code coverage numbers. I can regenerate them periodically, as more tests are added, whenever needed. I will close this issue.
What is the level of testing for this repo?
I see that there are tests and they are being run by GitHub Actions! Nice!
@jbathegit do you have a feel for how well this testing covers the code in this repo? Do we need more testing, and, if so, specifically what additional tests should we have?