
Depth-integrated momentum budget diagnostics #1343

Merged
10 commits merged into dev/gfdl on Mar 3, 2021

Conversation

hmkhatri
Contributor

Building on depth-averaged momentum budget diagnostics, I have added diagnostics for depth-integrated momentum budget terms. These new diagnostics are required for analyzing the depth-integrated vorticity budget. Some of the terms in the depth-integrated vorticity budget are sensitive and offline calculation of these diagnostics can result in significant errors. Hence, online diagnostic calculations are required for accurate analysis.

Also, typos in the register_diag_field calls for some older diagnostics have been corrected.

@codecov

codecov bot commented Feb 28, 2021

Codecov Report

Merging #1343 (41aa265) into dev/gfdl (a545274) will increase coverage by 0.07%.
The diff coverage is 100.00%.


@@             Coverage Diff              @@
##           dev/gfdl    #1343      +/-   ##
============================================
+ Coverage     45.75%   45.82%   +0.07%     
============================================
  Files           234      234              
  Lines         72556    72661     +105     
============================================
+ Hits          33195    33300     +105     
  Misses        39361    39361              
| Impacted Files | Coverage Δ |
|---|---|
| src/core/MOM_variables.F90 | 57.77% <ø> (ø) |
| src/core/MOM_CoriolisAdv.F90 | 67.57% <100.00%> (+2.10%) ⬆️ |
| src/core/MOM_barotropic.F90 | 68.11% <100.00%> (+0.07%) ⬆️ |
| src/core/MOM_dynamics_split_RK2.F90 | 89.73% <100.00%> (+0.79%) ⬆️ |
| src/parameterizations/lateral/MOM_hor_visc.F90 | 66.05% <100.00%> (+0.45%) ⬆️ |


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a545274...fbf0e91.

@hmkhatri
Contributor Author

So, the Expression verification / test-repro-and-dims check did not pass, but compilation and the test runs worked fine. I am not sure how to fix this issue. Any help would be appreciated.

@marshallward
Collaborator

The H and L dimensionality tests are failing (e.g. tc2.dim.l, tc2.dim.h), so you need to work out why the dimensions of these diagnostics are not correct.

Just quickly eyeing the code, the depth-integrated diagnostics probably need a factor of H. But it's probably better if you work through it more carefully.

I'd recommend trying to run the tests in .testing before pushing any new updates.

@marshallward
Collaborator

It looks like it might just be an H vs L issue. Dimensionality is in a Pi-theorem sense (scaling invariance), rather than as an MKS unit.

@hmkhatri
Contributor Author

hmkhatri commented Mar 1, 2021

@marshallward Based on your comments, I tried the following to resolve the H vs L dimensionality test issue,

In register_diag_field, I am now using conversion=US%Z_to_m*US%L_T2_to_m_s2, since the diagnostics are multiplied by layer thicknesses (earlier I used conversion=US%L_T_to_m_s**2). However, this did not resolve the issue and the dimensionality tests are still failing (see here).

So, I do not know how to rectify it.
Could someone please have a quick look and let me know if there is anything unusual?
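
For context, the registration in question looks roughly like the sketch below (the diagnostic name, axes group, and long name are illustrative placeholders, not the exact lines in this PR):

```fortran
! Sketch only: illustrative names, following the usual MOM6 register_diag_field pattern.
! The conversion argument is meant to undo the internal dimensional scaling for output;
! here it uses the Z (height) scaling, which is what the failing tests are flagging.
CS%id_intz_PFu_2d = register_diag_field('ocean_model', 'intz_PFu_2d', diag%axesCu1, Time, &
    'Depth-integral of Zonal Pressure Force Acceleration', 'm2 s-2', &
    conversion=US%Z_to_m*US%L_T2_to_m_s2)
```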

@ashao
Collaborator

ashao commented Mar 1, 2021

I agree with @marshallward, it's failing because you're missing a thickness scaling conversion. I think you need something like:

L_T2_to_m_s2*Z_to_m

@marshallward
Collaborator

H and Z are separate dimensions here, with H corresponding to thickness and Z being more positional (e.g. vertical gradients).

I will test it myself, but you may have luck using H in place of Z.

@ashao
Collaborator

ashao commented Mar 1, 2021

I think you're right, but it would only be detectable in non-Boussinesq mode.

@ashao
Collaborator

ashao commented Mar 1, 2021

So, thinking of something like this?

US%L_T2_to_m_s2 * GV%H_to_Z * US%Z_to_m

@marshallward
Collaborator

GV%H_to_m exists now, but I think it's relatively recent.

I haven't tested all the diagnostics yet, but using GV%H_to_m*US%L_T2_to_m_s2 seems to be working.
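
Concretely, a registration along the lines of the sketch above would then become (illustrative names again, only the conversion changes):

```fortran
! Same illustrative registration, but with the thickness (H) conversion rather than Z,
! which is the scaling the H/L dimensional-consistency tests are checking for.
CS%id_intz_PFu_2d = register_diag_field('ocean_model', 'intz_PFu_2d', diag%axesCu1, Time, &
    'Depth-integral of Zonal Pressure Force Acceleration', 'm2 s-2', &
    conversion=GV%H_to_m*US%L_T2_to_m_s2)
```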

I find it a bit non-intuitive for the H scalings to be in GV though; we should probably discuss that someday (but we probably won't).

@hmkhatri
Contributor Author

hmkhatri commented Mar 1, 2021

Thanks, @marshallward and @ashao

I have made corrections as per your suggestions and all checks passed.

@marshallward also suggested not using allocatable arrays and instead declaring the arrays on the stack, e.g.
real, dimension(SZIB_(G),SZJ_(G)) :: intz_PFu

I remember there was a discussion on a similar point in #1163. With regard to performance and memory load, is there a preference for allocatable vs. stack arrays (for both 2D and 3D diagnostics)?

@ashao @Hallberg-NOAA Your inputs would be very helpful.

@@ -963,6 +1007,25 @@ subroutine step_MOM_dyn_split_RK2(u, v, h, tv, visc, Time_local, dt, forces, p_s
! enddo ; enddo ; enddo
! call post_data(CS%id_hf_v_BT_accel, hf_v_BT_accel, CS%diag)
!endif
if (CS%id_intz_u_BT_accel_2d > 0) then
allocate(intz_u_BT_accel_2d(G%IsdB:G%IedB,G%jsd:G%jed))
Collaborator

These shouldn't be allocated and deallocated each and every timestep

Contributor Author

Thanks for pointing this out. I will make the necessary changes.
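
As a sketch of the intended change (the variable names follow the hunk above, but the layer loop is only indicated schematically), the per-timestep allocate/deallocate would be replaced by a fixed-size local array:

```fortran
! Declared with the other local work arrays at the top of step_MOM_dyn_split_RK2,
! using the MOM6 memory-size macros, so nothing is allocated inside the timestep.
real, dimension(SZIB_(G),SZJ_(G)) :: intz_u_BT_accel_2d  ! [H L T-2 ~> m2 s-2]

! ... later, where the diagnostic is posted:
if (CS%id_intz_u_BT_accel_2d > 0) then
  intz_u_BT_accel_2d(:,:) = 0.0
  ! accumulate the thickness-weighted (depth-integrated) u_BT_accel over layers here
  call post_data(CS%id_intz_u_BT_accel_2d, intz_u_BT_accel_2d, CS%diag)
endif
```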

@ashao
Collaborator

ashao commented Mar 1, 2021

@hmkhatri When we talked about this originally (I believe with @Hallberg-NOAA), the goal was to make sure that we don't get into a position where we're adding a bunch of diagnostic-only arrays to the stack. @marshallward correct me if I'm wrong, but hypothetically an unused stack array doesn't actually add to the memory footprint, because the memory is only committed if the array is actually used. For example, if you did something like

real, dimension(10,10) :: foo
return

the memory for foo is never allocated or used, but if you did something like

real, dimension(10,10) :: foo
foo(:,:) = 0.

then it would be.

Performance-wise, I would guess that what @marshallward is objecting to is the allocate and deallocate around the post_data. That can certainly bog down the model, since those operations take a non-negligible amount of time. As you did earlier, where it's an allocatable array inside the CS that is allocated during model initialization, the performance hit is negligible.

@marshallward
Collaborator

Yes, I agree with what @ashao wrote. If an array is declared on stack but never used, then it should not use any memory. ("Should" is the key word here, but that is how it's supposed to work.) There is also the cost of repeated alloc/dealloc per timestep to consider. And if there is any benefit to allocation, a good Fortran compiler is just going to do it behind the scenes anyway.

I think our policy here (if I can dare to call it that) is not yet clear though, so perhaps @Hallberg-NOAA should weigh in.

@hmkhatri
Contributor Author

hmkhatri commented Mar 1, 2021

@ashao @marshallward Thanks for the comments. I agree that I should not have used allocation and deallocation in each timestep. This was my fault. I will correct it, and move memory allocation calls to initialization subroutines.

In the earlier pull request, I had initially defined pointers for diagnostic arrays in the CS control structure. For some reason, we wanted to keep the arrays used for the diagnostics local to the modules where they were being used. To have the combination of allocatable arrays and local variables, I ended up putting allocation and deallocation commands within the timestep loop. I agree that this is not ideal.

For now, I can either define allocatable arrays in the main control structures of the modules, so that they are allocated only once (though the arrays would no longer be local), or I can simply declare stack arrays locally. Let me know if there is a better way to do it.
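
For reference, the first option would look roughly like the following sketch (illustrative names; the array lives in the module's control structure and is allocated once at initialization, then reused every timestep):

```fortran
type, public :: example_diag_CS ; private
  ! Illustrative control structure holding a persistent diagnostic work array.
  real, allocatable, dimension(:,:) :: intz_PFu_2d  ! Depth-integrated PFu [H L T-2 ~> m2 s-2]
  integer :: id_intz_PFu_2d = -1
end type example_diag_CS

! In the module's initialization routine, after register_diag_field:
if (CS%id_intz_PFu_2d > 0) then
  allocate(CS%intz_PFu_2d(G%IsdB:G%IedB,G%jsd:G%jed)) ; CS%intz_PFu_2d(:,:) = 0.0
endif
```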

@ashao
Collaborator

ashao commented Mar 2, 2021

IIRC @hmkhatri, there were a couple of arrays that were being updated in one part of the dynamics but unavailable in the other part of the code where you needed to post the data. To avoid making the call trees convoluted and doing unnecessary array copies, we (together with @Hallberg-NOAA) decided to just assign a pointer array within the lower-level CS.
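
Something like this sketch, to be clear (names are illustrative, not the actual code from that PR):

```fortran
! Illustrative only: the lower-level CS keeps a pointer to an array that is updated
! elsewhere in the dynamics, so it can be posted later without copies or extra arguments.
type :: example_lower_CS ; private
  real, pointer, dimension(:,:,:) :: PFu_diag => NULL()
end type example_lower_CS

! Pointer assignment done once (e.g. at initialization); the target array PFu must
! carry the target (or pointer) attribute for this to be valid.
CS%PFu_diag => PFu
```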

Oh and no worries about the allocate/deallocate. At least you weren't doing 6 allocate/deallocates of 3D arrays like I was doing at one point.

@hmkhatri
Contributor Author

hmkhatri commented Mar 2, 2021

@ashao thanks for the clarification. For now, I have just declared the 2D arrays on the stack to avoid the allocate/deallocate calls. Does the PR require any further changes?

@StephenGriffies
Contributor

We had a meeting with @marshallward today, and this pull request follows his recommendation. In that meeting, we concluded that the diagnostics in MOM6 follow a variety of styles for memory management. Marshall's suggestion is to define fixed-size arrays within the particular subroutine where the diagnostic is needed (on the "stack", I believe, is the lingo). This approach contrasts with the allocate/deallocate done in the previous pull request (and as done for some other MOM6 diagnostics).

Standardization of MOM6 diagnostic memory management would be useful. But for this pull request, we hope to do no harm and not change anything that was already there. We are not trying to produce the standard.

@marshallward
Collaborator

marshallward commented Mar 3, 2021

Gaea regression: https://gitlab.gfdl.noaa.gov/ogrp/MOM6/-/pipelines/12153 ✔️

Regression has passed (up to a parameter diagnostic change, as expected). This will require a MOM6-examples update.
