
Add WW3 updates to CESM NUOPC driver #717

Closed
wants to merge 8 commits

Conversation

dabail10
Contributor

For detailed information about submitting Pull Requests (PRs) to the CICE-Consortium,
please refer to: https://github.com/CICE-Consortium/About-Us/wiki/Resource-Index#information-for-developers

PR checklist

  • Short (1 sentence) summary of your PR:
    Add updates for CESM NUOPC driver. Needed for coupling to WW3 and single column CAM.
  • Developer(s):
    dabail10 (D. Bailey)
  • Suggest PR reviewers from list in the column to the right.
  • Please copy the PR test results link or provide a summary of testing completed below.
    Ran aux_cice testsuite within CESM.
  • How much do the PR code changes differ from the unmodified code?
    • bit for bit
    • different at roundoff level
    • more substantial
  • Does this PR create or have dependencies on Icepack or any other models?
    • Yes
    • No
  • Does this PR add any new test cases?
    • Yes
    • No
  • Is the documentation being updated? ("Documentation" includes information on the wiki or in the .rst files from doc/source/, which are used to create the online technical docs at https://readthedocs.org/projects/cice-consortium-cice/. A test build of the technical docs will be performed as part of the PR testing.)
    • Yes
    • No, does the documentation need to be updated at a later time?
      • Yes
      • No
  • Please provide any additional information or relevant details below:
This is the initial set of changes from CESM. They are only in the NUOPC driver and do not change CICE standalone.

@@ -913,6 +943,26 @@ subroutine ice_export( exportState, rc )
! ice fraction
ailohi(i,j,iblk) = min(aice(i,j,iblk), c1)

! ice thickness (m)
Contributor

This comment refers to a preceding line (933) containing the OMP directive: do additional variables need to be made private for the added floediam calculation?

Contributor Author

Good catch. Looks like k and floe_rad_c need to be added to the OMP PRIVATE statement. Anything else you noticed?
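For reference, a hedged sketch of the kind of change under discussion; the variable list is illustrative, not the exact PRIVATE clause in ice_import_export.F90:

      ! Illustrative only: the loop index k and floe_rad_c are added to the
      ! PRIVATE list per the discussion above, so each thread gets its own copy.
      ! (If floe_rad_c is only read inside the loop it may not strictly need
      ! to be private.)
      !$OMP PARALLEL DO PRIVATE(iblk,i,j,k,floe_rad_c)
      do iblk = 1, nblocks
         do j = 1, ny_block
            do i = 1, nx_block
               ailohi(i,j,iblk) = min(aice(i,j,iblk), c1)
               ! floediam/floethick accumulation over FSD categories uses k and floe_rad_c
            end do
         end do
      end do
      !$OMP END PARALLEL DO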

Contributor

I had the wave-ice coupling working in a branch, but I need to make sure it works if we don't have wave-coupling activated.

I also noticed the change at line 802 in the import/export code. Is this related to the C-grid development?

Contributor Author

Yes! The C-grid has been merged onto main.

Contributor Author

Also, I just found out that this code for the FSD needs to be in an if block. The set_stateexport calls and so on probably need to check for the fields being requested as well.
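A minimal sketch of the guard being described, assuming the tracer-flag query used in the diff posted later in this thread:

      ! Sketch only: fill the FSD-related export fields only when the
      ! floe size distribution tracer is active.
      call icepack_query_tracer_flags(tr_fsd_out=tr_fsd)
      call icepack_warnings_flush(nu_diag)
      if (tr_fsd) then
         ! compute floediam/floethick for the wave-coupling exports here
      end if

The actual placement of the tr_fsd query is shown in the diff posted later in this thread.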

!call grid_average_X2Y('F',work,'T',ss_tltx,'U')
!work = ss_tlty
!call grid_average_X2Y('F',work,'T',ss_tlty,'U')
call t2ugrid_vector(uocn)
@DeniseWorthen
Contributor

DeniseWorthen commented May 12, 2022

I'm getting a compile error for t2ugrid_vector; it has been removed from cicecore/cicedynB/infrastructure/ice_grid.F90 and replaced by the grid_average_X2Y interface, it seems.

I'm not sure I understand the @apcraig comment; maybe this entire block should be removed.

Contributor

uocn, vocn, ss_tltx, ss_tlty are now assumed to be on the U grid and the averaging to the T grid is being done in CICE. It no longer needs to be done here. We are moving to a more general system in CICE where the location of the coupling/forcing variables are more clearly defined within CICE. This should be carefully reviewed in the context of CESM coupling to make sure it's working properly.

@dabail10
Contributor Author

Good catch. I hadn't actually tested this cap with the latest C-grid changes. I will fix this.

@dabail10
Contributor Author

This actually makes life easier, so we don't have to worry about the grid internally in CICE. However, this is double averaging ... I think there is always going to be averaging whatever we do. Eventually, the fields should come through the coupler on the C-grid and we should not need to interpolate at all.

@apcraig
Contributor

apcraig commented May 12, 2022

> This actually makes life easier, so we don't have to worry about the grid internally in CICE. However, this is double averaging ... I think there is always going to be averaging whatever we do. Eventually, the fields should come through the coupler on the C-grid and we should not need to interpolate at all.

You should not have the X2Y calls in the cap anymore. As you say, you will double average and the results will be incorrect. The way the implementation works now is that you set grid_ocn to A, B, or C and this tells CICE what grid the T and U variables are on. CICE then will interpolate forcing/coupling variables to whatever grid is needed internally. So for instance, if uocn is on the T grid, set grid_ocn="A" and do nothing in the cap. If uocn is on the U grid, set grid_ocn="B" and do nothing in the cap. If uocn is on the East face, set grid_ocn="C" and do nothing in the cap. Whether CICE is running on the B or C grid, it will interpolate grid_ocn to grid_ice correctly internally. Some of this hasn't been fully tested yet (i.e. coupling variables on the B or C grid), so we'll have to validate as we migrate to that capability, but it means there should be no more grid averaging in the caps. The coupling/forcing field locations just have to be consistent with A, B, or C staggers for everything to work.
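For concreteness, a hedged sketch of the configuration being described (the grid_nml names are assumed from the C-grid implementation; the cap then passes the ocean fields through without any averaging):

&grid_nml
    grid_ice = 'C'   ! grid CICE runs its dynamics on (B or C)
    grid_ocn = 'A'   ! stagger of the incoming ocean fields, e.g. uocn/vocn on T points
/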

@DeniseWorthen
Contributor

DeniseWorthen commented May 12, 2022

So if the MOM6 cap was modified to export uocn, vocn on the C-grid (vs what it does now, which is to move everything to the A-grid), CICE would now accommodate those velocities on the native C-grid. That's great. There is also a rotation (and back) to true directions (vs I-J directions) and as long as MOM6 and CICE6 were on the same grid, that also would not be needed.

@apcraig
Contributor

apcraig commented May 12, 2022

@DeniseWorthen, that's the idea. We have to validate when we start doing it. You do have to be careful about the rotations. In CESM, the standard is E/N. If you don't rotate to E/N, then you risk running into problems if the ice model is not on the same grid. You also almost certainly run into problems if the ocean currents are used in another component, say the atm model. There are many options including sending out both rotated and non-rotated versions of the fields for use in different places. Or maybe we need to set some rotation information either via namelist or coupling flags as part of controlling this stuff. TBD.

@dabail10
Contributor Author

We will likely need some flags here to account for rotation. Actually, if the ANGLE array is all zeroes, say, then we would effectively have no rotation and would be doing an extra calculation for nothing.
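For illustration, a hedged sketch of the kind of skip being suggested; the rotation mirrors the existing use of ANGLET in the cap, but the zero-angle shortcut and the names ue/vn/ui/vj are hypothetical, and the sign convention must match how ANGLET is defined:

      ! Hypothetical: skip the vector rotation when the grid angles are all zero,
      ! since the rotation is then an identity operation. (Checking this across
      ! a distributed grid would need a global reduction over blocks/tasks.)
      rotate_vectors = (maxval(abs(ANGLET)) > puny)
      if (rotate_vectors) then
         ue = ui*cos(ANGLET(i,j,iblk)) - vj*sin(ANGLET(i,j,iblk))
         vn = ui*sin(ANGLET(i,j,iblk)) + vj*cos(ANGLET(i,j,iblk))
      else
         ue = ui
         vn = vj
      end if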

@apcraig
Contributor

apcraig commented May 13, 2022

@dabail10, this all looks fine to me. When you have finished your testing and mods, could you make a note of that here, and I'll take a final look and approve. Thanks.

@DeniseWorthen
Contributor

DeniseWorthen commented May 13, 2022

I needed a couple more changes to get it to compile. The tr_fsd query needed to be in ice_export, not ice_import, and tr_fsd needed to be queried as a tracer flag (icepack_query_tracer_flags) rather than as a parameter.

diff --git a/cicecore/drivers/nuopc/cmeps/CICE_RunMod.F90 b/cicecore/drivers/nuopc/cmeps/CICE_RunMod.F90
index 6f145ab..79066e8 100644
--- a/cicecore/drivers/nuopc/cmeps/CICE_RunMod.F90
+++ b/cicecore/drivers/nuopc/cmeps/CICE_RunMod.F90
@@ -129,7 +129,7 @@ subroutine ice_step
       use ice_restoring, only: restore_ice, ice_HaloRestore
       use ice_step_mod, only: prep_radiation, step_therm1, step_therm2, &
           update_state, step_dyn_horiz, step_dyn_ridge, step_radiation, &
-          biogeochemistry, save_init, step_dyn_wave, step_snow
+          biogeochemistry, step_prep, step_dyn_wave, step_snow
       use ice_timers, only: ice_timer_start, ice_timer_stop, &
           timer_diags, timer_column, timer_thermo, timer_bound, &
           timer_hist, timer_readwrite
diff --git a/cicecore/drivers/nuopc/cmeps/ice_import_export.F90 b/cicecore/drivers/nuopc/cmeps/ice_import_export.F90
index 906f216..767f85f 100644
--- a/cicecore/drivers/nuopc/cmeps/ice_import_export.F90
+++ b/cicecore/drivers/nuopc/cmeps/ice_import_export.F90
@@ -428,7 +428,6 @@ subroutine ice_import( importState, rc )
     real (kind=dbl_kind), pointer    :: dataptr2d_dstdry(:,:)
     character(len=char_len)          :: tfrz_option
     integer(int_kind)                :: ktherm
-    logical(log_kind)                :: tr_fsd
     character(len=*),   parameter    :: subname = 'ice_import'
     character(len=1024)              :: msgString
     !-----------------------------------------------------
@@ -436,7 +435,6 @@ subroutine ice_import( importState, rc )
     call icepack_query_parameters(Tffresh_out=Tffresh)
     call icepack_query_parameters(tfrz_option_out=tfrz_option)
     call icepack_query_parameters(ktherm_out=ktherm)
-    call icepack_query_parameters(tr_fsd_out=tr_fsd)

     if (io_dbug > 5) then
        write(msgString,'(A,i8)')trim(subname)//' tfrz_option = ' &
@@ -885,6 +883,7 @@ subroutine ice_export( exportState, rc )
     real    (kind=dbl_kind) :: floediam(nx_block,ny_block,max_blocks)
     real    (kind=dbl_kind) :: floethick(nx_block,ny_block,max_blocks) ! ice thickness
     real    (kind=dbl_kind) :: Tffresh
+    logical(log_kind)       :: tr_fsd
     integer (kind=int_kind) :: nt_fsd
     real    (kind=dbl_kind), allocatable :: tempfld(:,:,:)
     real    (kind=dbl_kind), pointer :: dataptr_ifrac_n(:,:)
@@ -904,6 +903,7 @@ subroutine ice_export( exportState, rc )
     !       tr_zaero_out=tr_zaero, tr_bgc_Nit_out=tr_bgc_Nit)

     call icepack_query_tracer_indices(nt_fsd_out=nt_fsd)
+    call icepack_query_tracer_flags(tr_fsd_out=tr_fsd)

     call icepack_warnings_flush(nu_diag)
     if (icepack_warnings_aborted()) call abort_ice(error_message=subname, &

@dabail10
Contributor Author

Great. Thanks. I will hopefully get to test this out with the head of Consortium main soon.

@DeniseWorthen
Contributor

I'm testing now w/ the wave coupling off and all baselines are changing. I expected them to be B4B, so I need to check this out some more.

@dabail10
Contributor Author

Curious. Cheyenne is back up, but I won't be able to look at this until Monday. The last version of the code I had was from December. I think there were some answer changes on Consortium main. The threading in particular ... Tony can confirm or deny this.

@apcraig
Contributor

apcraig commented May 13, 2022

I'm pretty sure the threading didn't change results, but maybe something else did. It could also be the changes from the C-grid. We tried to track how the C-grid changes would affect coupled models, but we didn't do any testing. @dabail10, if you tell me what "December" version of CICE is currently in CESM, I can review changes since then and see if anything jumps out.

@dabail10
Contributor Author

I was up to date with the Dec 17th hash, dfc6a11.

Dave

@dabail10
Contributor Author

There were a couple of bug fixes in the OMP directives for variables that were not private. Maybe these did not impact the answers.

@DeniseWorthen
Contributor

DeniseWorthen commented May 13, 2022

Thanks, I updated NOAA-EMC just before the C-grid merge, which brought me up to 8d7314f. At that point all baselines were B4B w/ our previous baselines.

@apcraig
Contributor

apcraig commented May 13, 2022


Thanks @DeniseWorthen, it sounds like you and @dabail10 are not synced up beyond maybe the cap. Whatever answer changes you're seeing must be coming from the C-grid and/or cap issues.

It might be worthwhile for @dabail10 to update CICE to just before the C-grid merge and verify results; the cap should not have to change. Then update to the C-grid merge and update the cap.

CICE standalone should be bit-for-bit with the C-grid merge. But the coupled system may not be. Some of the averaging was moved out of the forcing/coupling code and into the dynamics. This was done in a bit-for-bit way for standalone, but if the averaging was being done slightly differently in the cap than in the standalone forcing, this could result in answer changes.

I think this testing needs to be done carefully. Let me know if you want me to help. I'm willing to jump into CESM a bit and contribute.

@DeniseWorthen
Contributor

DeniseWorthen commented May 16, 2022

Thanks for the clarification from @apcraig. I did the update just prior to the C-grid merge precisely so I'd be able to test in the way you suggest. What I have done is create an addCgrid branch and merge the latest main into it (I had one minor compile fix, for save_init). Then I ran our current develop branch and the Cgrid branch and compared the difference in the mediator history files for the ice at the end of the first ice timestep. I'm seeing some anomalies along the seam which may be of concern (the plotted differences have a factor of 1e14 applied). The largest of these are the differences in taux, tauy exported by the ice.

[Screenshots: difference maps of the ice-exported taux and tauy along the seam]

@apcraig
Contributor

apcraig commented May 16, 2022

@DeniseWorthen, thanks for doing this. Does 1e14 imply the differences are in the 14th digit? I have a couple of ideas to see if we can figure out where the difference comes in. The first test would be to change cicecore/cicedynB/dynamics/ice_dyn_evp.F90 at about line 321: change the 'S' in these lines to an 'F'.

      call grid_average_X2Y('S', uocn     , grid_ocn_dynu, uocnU   , 'U')
      call grid_average_X2Y('S', vocn     , grid_ocn_dynv, vocnU   , 'U')
      call grid_average_X2Y('S', ss_tltx  , grid_ocn_dynu, ss_tltxU, 'U')
      call grid_average_X2Y('S', ss_tlty  , grid_ocn_dynv, ss_tltyU, 'U')

Does that bring the solutions closer together or make them the same?

@DeniseWorthen
Contributor

DeniseWorthen commented May 16, 2022

Yes, the differences are O(1e-14) along the few rows at the seam; outside of that they drop by an order of magnitude.

There was no significant difference using 'F' instead of 'S' when comparing against the current develop branch.

I need to find some time to dig into this too. I'd like to either look at the internal cice fields w/in the cap, or else the cice history fields written at each timestep.

@apcraig
Contributor

apcraig commented May 16, 2022

That's too bad it didn't change answers much. Were "S" and "F" bit-for-bit? I assume not. Just confirming, are you using evp dynamics?

You could try the following. In cicecore/cicedynB/dynamics/ice_dyn_evp.F90 at about line 321, change

  call grid_average_X2Y('S', uocn     , grid_ocn_dynu, uocnU   , 'U')
  call grid_average_X2Y('S', vocn     , grid_ocn_dynv, vocnU   , 'U')
  call grid_average_X2Y('S', ss_tltx  , grid_ocn_dynu, ss_tltxU, 'U')
  call grid_average_X2Y('S', ss_tlty  , grid_ocn_dynv, ss_tltyU, 'U')

to

!      call grid_average_X2Y('S', uocn     , grid_ocn_dynu, uocnU   , 'U')
!      call grid_average_X2Y('S', vocn     , grid_ocn_dynv, vocnU   , 'U')
!      call grid_average_X2Y('S', ss_tltx  , grid_ocn_dynu, ss_tltxU, 'U')
!      call grid_average_X2Y('S', ss_tlty  , grid_ocn_dynv, ss_tltyU, 'U')
      uocnU = uocn
      vocnU = vocn
      ss_tltxU = ss_tltx
      ss_tltyU = ss_tlty

and in cicecore/drivers/nuopc/cmeps/ice_import_export.F90, uncomment at about line 808

       work = uocn
       call grid_average_X2Y('F',work,'T',uocn,'U')
       work = vocn
       call grid_average_X2Y('F',work,'T',vocn,'U')
       work = ss_tltx
       call grid_average_X2Y('F',work,'T',ss_tltx,'U')
       work = ss_tlty
       call grid_average_X2Y('F',work,'T',ss_tlty,'U')

If you can compare that against your 3 prior runs (pre-cgrid, cgrid, cgrid+mod1), that would be great. I think this should be identical to cgrid+mod1. If it isn't, that would be interesting to know. And if it isn't bit-for-bit with pre-cgrid, then we'll have to dig more.

@DeniseWorthen
Contributor

OK, great, I'll try those cases.

The mod1 ("F") was not B4B with "S".

Our EVP settings are

&dynamics_nml
    kdyn            = 1
    ndte            = 120
    revised_evp     = .false.
    evp_algorithm   = 'standard_2d'

@dabail10
Contributor Author

This is great debugging. Thanks for this Denise. I will try to get up and running with CESM2 and the latest CICE today.

@dabail10
Contributor Author

Do we test the PIO interfaces? Anyhow, the function query_field in io_pio2/ice_restart.F90 has a USE_NETCDF CPP guard. I don't think this should be there.
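For context, a purely illustrative sketch of the pattern being flagged (not the actual body of query_field): wrapping a PIO-based inquiry in a netCDF-only CPP guard ties the PIO restart path to the netCDF build flag.

      ! Illustrative pattern only: in a PIO build compiled without USE_NETCDF,
      ! the guarded inquiry disappears and the query silently returns .false.
#ifdef USE_NETCDF
      status = pio_inq_varid(File, trim(vname), vardesc)
      query_field = (status == PIO_noerr)
#else
      query_field = .false.
#endif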

@dabail10
Contributor Author

Ok. I added the changes to get the CMEPS/NUOPC cap to compile. I will try a run now.

@dabail10
Contributor Author

The DEBUG test is just hanging on Cheyenne. I am trying to turn off threading.

@DeniseWorthen
Contributor

@apcraig I'll call your second set of changes "mod2". They test B4B against mod1, as you expected. But they are not B4B with the base cgrid case or the pre-cgrid case. The same pattern appears as we got initially.

Just FYI, this test case for us starts with the ocean at rest, so uocn,vocn=0. However, the surface tilts are non-zero.

I've set up on Cheyenne now, and I'll verify my previous results.

@apcraig
Contributor

apcraig commented May 17, 2022

Thanks @DeniseWorthen, results as expected. That's sort of good.

I think the next step is more detailed debugging. I think the right approach is to update to just before the C-grid merge, then be able to do side-by-side testing between that and a version that has updated to the C-grid merge, as @DeniseWorthen has done. We have three models we could do this with (UFS, CESM, and RASM), covering two different caps. I will probably have to do the same testing with RASM. Between the three of us, what's the best way to proceed? Could we have a brief call today to chat about strategy and timeline? I'll send an email to organize.

@apcraig
Contributor

apcraig commented Jun 3, 2022

The changes to io_pio2/ice_restart.F90 were committed in #726 and create a conflict. Sorry about that.

@DeniseWorthen
Contributor

@dabail10 I'd like to get the wave-ice coupling changes merged. Have you had a chance to work on this? I have a branch I've been using so I'd be happy to make the PR if that takes the load off you.

@dabail10
Contributor Author

I have not. We have been waiting on feedback from Lettie and on some additional WW3-related cap issues. I am in Madison at the AMS Polar meeting right now, but could perhaps take a look at this next week.

@eclare108213
Contributor

What is the status of this PR? Are there still issues that need to be debugged?

@dabail10
Contributor Author

We were finding other C-grid related issues that we were dealing with first. I think @DeniseWorthen had something with the waves here?

@DeniseWorthen
Contributor

I apologize for missing this conversation for an entire month. I did merge the changes for wave-ice coupling into the NUOPC cap in emc/develop. At the time, we were ahead of ESCOMP since I had also merged the C-grid changes into the cap. I haven't merged in the most recent changes from the Consortium, though (since Aug, PR #752). @dabail10, what is the best course here? Should I make a PR to ESCOMP or to Consortium/main?

@dabail10
Contributor Author

dabail10 commented Nov 2, 2022

I think it would be great if you could make the changes in the Consortium. I have a version in ESCOMP that has a number of new changes to the cap for CESM. I'd like to make sure to get your changes in.

@DeniseWorthen
Contributor

Ok, thanks. I'll prepare a PR.

@dabail10
Contributor Author

dabail10 commented Nov 4, 2022

This has been replaced by #782

@dabail10 dabail10 closed this Nov 4, 2022
@dabail10 dabail10 deleted the nuopc_new branch November 4, 2022 15:44