Update for latest version of weather model (hash 2f1c8e1), add RRFS_NA_3km pre-defined domain, update timestep and MPI settings #492
Conversation
# 1) with k_split=5/n_split=2, each time step takes ~15 seconds
# 2) with 32-bit build, each time step takes ~6 seconds

NNODES_MAKE_ICS="12"
Are these settings specific to Hera? If we need to change these according to platform, we can put that kind of code in run_experiments.sh. That's where I've been doing platform-specific differentiation of settings.
@JeffBeck-NOAA Can better answer these questions since these changes came from him.
I think the problem is specific to the domain, which is very large and high-resolution, requiring more nodes for make_ics and make_lbcs for memory reasons. I assume that would mean this is required on all platforms.
I do think that these settings are less than ideal. Making PPN_MAKE_ICS and PPN_MAKE_LBCS lower means we will be under-utilizing the nodes.
We do need to have a separate conversation about making PPN_RUN_FCST (and maybe every PPN setting) platform-specific in defaults. Right now we are under-utilizing nodes on most platforms using default settings. Maybe this can be rolled into issue #452, since the OMP settings will have to be taken into account as well?
These settings are required for all platforms due to the domain size and resolution. The PPN_MAKE_ICS and PPN_MAKE_LBCS values are a specific chgres_cube requirement for large domains (need fewer processes but massive amounts of memory) due to an ESMF limitation/memory bug.
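Concretely, this is the usual workaround for memory-bound MPI jobs: request more nodes but fewer ranks per node, so each chgres_cube process gets a larger share of node memory. A hypothetical WE2E config.sh fragment illustrating the pattern (NNODES_MAKE_ICS="12" matches this PR's diff; the other values here are illustrative assumptions, not the PR's actual numbers):

```shell
# Spread chgres_cube across more nodes with fewer ranks per node, so
# each rank sees more memory. NNODES_MAKE_ICS matches the value shown
# in this PR's diff; the PPN_* and *_LBCS values are illustrative.
NNODES_MAKE_ICS="12"
PPN_MAKE_ICS="4"
NNODES_MAKE_LBCS="12"
PPN_MAKE_LBCS="4"
```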
@JeffBeck-NOAA Thanks for the clarification. In that case should these be handled somewhere in the generate workflow calling tree in order to have these settings as a default for this domain specifically? Maybe that's something best handled in an issue and follow-up PR in the future?
@mkavulich, yeah, this is where things can get really thorny. Do we want to fill the generate script with nested if/case statements for different domains, resolutions, platforms for changes to PPN for individual tasks? I wasn't sure if we wanted to go that route or just have users source the specific WE2E config.sh file when they want to run this domain? I think this definitely deserves an issue and potential follow-up PR.
There's likely some variability between platforms, but as a first approximation, chgres_cube needs more nodes and fewer cores per node for memory management on this domain across all platforms. Since there are no NNODES_* or PPN_* settings currently defined in the set_predef_grid_params.sh script, I left the changes in the WE2E test. The test is designed for Hera, but could be run on any platform.
@gsketefian I was able to run this test on Cheyenne as well (but the forecast step did not complete due to wallclock time, so I can't be sure if it would have fully run successfully).
I agree that k_split and n_split should likely be handled by domain; however, currently they are also set according to the physics scheme, which complicates things. I believe they should also be configurable via config.sh for maximum flexibility, but again, that complicates things.
I can take this change out of the current PR...more discussion definitely needs to happen, the question is do we keep this change for now or not.
@mkavulich, if the NA 3-km WE2E test runs with the old k/n_split settings, then let's stick with them and deal with domain-, physics-, and platform-dependent settings in a later PR.
Sounds good to me.
On AWS for the RRFS work, we run chgres on a single node that has 768 GB of memory. I wonder if there is a way to specify large-memory nodes if available, but I will leave that to you.
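On Slurm-based platforms, one way to request such a node would be scheduler directives in the make_ics/make_lbcs job card. This is only a sketch: the partition name is a hypothetical placeholder (site-specific), and the memory figure just mirrors the AWS node described above:

```shell
#SBATCH --nodes=1
#SBATCH --partition=bigmem   # hypothetical large-memory partition name
#SBATCH --mem=768G           # mirrors the 768 GB AWS node mentioned above
```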
ush/templates/input.nml.FV3 (outdated)

```diff
@@ -85,15 +85,15 @@
   hord_vt = 6
   hydrostatic = .false.
   io_layout = 1,1
-  k_split = 4
+  k_split = 5
```
So the new k/n_split values are better (or at least not worse) for all grids? They are not at all grid-dependent, or maybe even suite-dependent?
@JeffBeck-NOAA will be able to answer this better, but I did notice that the Graduate Student Test now takes longer to run, often hitting the wallclock limit (#490). This may be due to the different k_split and n_split settings, but I did not do a rigorous comparison between the current develop branch and my changes.
I think it is definitely necessary to address these concerns but I do not know enough about the inner workings of the dycore to have an educated opinion.
@gsketefian, yes these k/n_split values are better for all grids and are the recommended values from EMC. I'm not sure why the GST would take longer to run. Was #490 a problem before this PR?
@gsketefian, @mkavulich, I have confirmed that the n_split/k_split values that I received need to be flipped. So it should be k_split=2/n_split=5 and not k_split=5/n_split=2. This can go in a future PR, but @mkavulich, this is likely why your GST run is slower. If we want, we can keep the previous values for now (k_split=4/n_split=5).
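For reference, a sketch of what the corrected fragment of ush/templates/input.nml.FV3 would look like with the flipped values (surrounding namelist entries elided; this is an illustration of the swap, not the full namelist):

```fortran
&fv_core_nml
  ! flipped per the EMC recommendation: 2 vertical-remapping loops,
  ! each with 5 acoustic substeps
  k_split = 2
  n_split = 5
/
```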
I think I'd rather fix it here if this is what needs to go in. I'll swap those values and see what the result is. Assuming it is not slower I'll apply those changes to this PR.
I am actually not sure if #490 was a specific problem before making these changes but I did notice that my unmodified runs were quite close to reaching the wallclock limit (within a few minutes).
I can't see how k_split=5, n_split=2 can be faster than k_split=2, n_split=5. There is more expensive computation done using the former (more calls with k_split=5).
@JeffBeck-NOAA @JamesAbeles-NOAA Sorry, I just read my initial message and I agree it was super-unclear.
The new test (k_split=2 and n_split=5) is the faster one, approximately 20% faster than (k_split=5 and n_split=2). So it sounds like this is good and expected.
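That matches the usual cost intuition for the dycore: each physics time step runs k_split vertical-remapping loops, and each loop runs n_split acoustic substeps, so acoustic work scales with k_split*n_split (identical for both settings) while vertical-remapping work scales with k_split alone. A back-of-the-envelope check (the proportional cost model is an assumption, not a measured profile):

```shell
# Per physics time step:
#   acoustic substeps = k_split * n_split
#   remapping passes  = k_split
old_k=5; old_n=2   # previous test settings
new_k=2; new_n=5   # flipped (recommended) settings
echo "acoustic substeps: old=$((old_k * old_n)) new=$((new_k * new_n))"
echo "remapping passes:  old=${old_k} new=${new_k}"
```

Both settings do 10 acoustic substeps per physics step, but the flipped values cut the vertical remappings from 5 to 2, consistent with the ~20% speedup observed.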
@mkavulich, Ah! That makes much more sense. I also couldn't understand how more vertical remappings were resulting in a faster run. In that case, are we ready for a merge with those new settings?
I pinged Gerard to make sure I had addressed all of his concerns before merging. I'm also hoping to finish up some follow-up tests with the old settings so I can be sure I have the right figures in the commit message, but if that doesn't finish before Gerard gets back to me I'll just merge it anyway; this PR needs to get in.
My original concern was that with the new k_split/n_split values, although we're improving the computational efficiency (and maybe also the forecast quality?) for one grid (e.g. CONUS 3km), we might be degrading the forecast quality for another and not know it -- maybe the Alaska grids where the vertical mapping may have to be done more frequently because of the steep terrain. This is probably beyond the scope of this PR, but we should keep it in mind. Maybe the bigger question is whether k_split and n_split should be the same for all grids or should they be grid-dependent.
…l (hash 2f1c8e1) (#140)

## DESCRIPTION OF CHANGES:
This PR (along with https://github.com/NOAA-EMC/regional_workflow/pull/492) updates the UFS SRW App to work with a more up-to-date hash of the ufs-weather-model (ufs-community/ufs-weather-model@2f1c8e1).

## TESTS CONDUCTED:
Ran the full suite of tests on Hera (aside from nco tests) with updated environment files (/scratch2/BMC/det/kavulich/workdir/update_app_master/step-by-step/expt_dirs/). The only new failures are for older versions of NAM input; this is due to a change in the weather model and will need to be handled in a separate PR. Ran several end-to-end tests on Cheyenne (Intel 19.1.1) and Jet. Also ran the Graduate Student Test case on Orion. No failures outside of those outlined above and a few previously-known failures. **Tests have not been run on WCOSS platforms; these will likely fail without being updated to the latest ESMF modules, but DTC does not have access to update and test.**

## ISSUE:
Fixes issue #134
DESCRIPTION OF CHANGES:
This PR accomplishes three things:
Note that the changes for the new RRFS_NA_3km domain and default decomposition/CPU settings were originally proposed by @JeffBeck-NOAA in PR #480; however, since that PR requires the updates to the weather model in order to be tested with the intended physics suite, I have incorporated those changes into mine. As Jeff is on vacation right now I hope he won't mind me stealing his thunder :)
This PR will need to be accompanied by changes in the ufs-srweather-app for updating the weather model hash and incorporating some necessary build changes (including compiling with 32-bit reals by default); that PR has been created (ufs-community/ufs-srweather-app#140), but it is still a draft pending some platform-specific fixes and the merger of this PR.
TESTS CONDUCTED:
For the initial changes for PR #480, multiple tests on Hera were run, including a full 36-hr forecast here: /scratch2/BMC/det/beck/FV3-LAM/expt_dirs/test_RRFS_NA_3km_36hr
With the additional changes and updates to the weather model, and updates to the Hera environment file, all end-to-end tests (aside from nco tests) were run on Hera (intel). There were a few pre-existing failures, and aside from an occasional GST failure due to wallclock time issues (see #490), the only new failures were for grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_GSD_SAR, grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_HRRR, and grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_RRFS_v1beta, which all had a new failure in make_ics and make_lbcs. Currently investigating this issue, though it is almost certainly related to the build environments, which need to be addressed in ufs-community/ufs-srweather-app#140.
Tests are ongoing on Cheyenne, Jet and Orion; will update with the results once those are complete. However, I believe the Hera tests are sufficient for this PR, as any failures specific to those platforms need to be addressed at the app level.
CONTRIBUTORS:
@JeffBeck-NOAA authored the portion of these changes originating from #480, and offered the following credits on his original PR:
Thanks are due to @JamesAbeles-NOAA for his recommendations for build/namelist changes and help troubleshooting run times. Thanks to @BenjaminBlake-NOAA and @JacobCarley-NOAA for their help with the domain configuration.