Use Case Notebook for "Convective Parameters for Understanding Severe Thunderstorms" #9
That sounds like a good approach in general. A word of warning: the current implementation of CAPE in MetPy (which will be released in 0.6, probably tomorrow) works on individual profiles, so it might require some looping. Given your requirements, we may need to do some work to improve MetPy's CAPE calculation.
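Since the profile-wise API requires looping over every grid point, the pattern looks roughly like the sketch below. Note this is an illustrative sketch only: `cape_for_profile` is a hypothetical placeholder standing in for a single-profile routine such as MetPy's, not the actual MetPy call.

```python
import numpy as np

def cape_for_profile(pressure, temperature, dewpoint):
    """Hypothetical placeholder for a single-profile CAPE routine
    (e.g. MetPy's); here it just returns a dummy scalar."""
    return float(np.sum(temperature - dewpoint))

def cape_over_grid(pressure, temperature, dewpoint):
    """Apply a single-profile calculation at every (y, x) grid point.

    Arrays are shaped (nlev, ny, nx); the profile-wise routine is
    called once per column, which is the looping the comment above
    warns about.
    """
    _, ny, nx = temperature.shape
    cape = np.empty((ny, nx))
    for j in range(ny):
        for i in range(nx):
            cape[j, i] = cape_for_profile(
                pressure[:, j, i], temperature[:, j, i], dewpoint[:, j, i]
            )
    return cape
```

For a large grid this double loop in pure Python is slow, which is exactly why the thread goes on to discuss ufunc wrapping and Numba.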
My understanding is that the Fortran code operates on a grid-point-by-grid-point basis as well. The issue, in fact, is not only to compare the results but to see how to speed it up. We shall see.
@dopplershift: regarding CAPE and operations on profiles in general, you might be interested in checking out the latest Python implementation of TEOS-10, the ocean thermodynamic equation of state. All the functions were originally designed to operate on single columns (think CTD casts). The approach was to wrap the underlying C calls with NumPy ufuncs, enabling efficient broadcasting to multidimensional arrays. Maybe a similar approach could be of use for our needs here.
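In pure NumPy, a similar effect to the gufunc wrapping described above can be sketched with `np.vectorize` and a generalized-ufunc signature, which maps a column-wise core function over any leading dimensions. This is a minimal sketch, not the TEOS-10 implementation itself; `cape_1d` is a hypothetical toy core.

```python
import numpy as np

def cape_1d(temperature, dewpoint):
    # Hypothetical toy single-column computation standing in for a
    # real profile-wise routine.
    return np.sum(temperature - dewpoint)

# The gufunc-style signature '(n),(n)->()' tells NumPy the last axis
# is the core (vertical) dimension; leading dimensions broadcast,
# similar in spirit to the ufunc wrapping used by the Python TEOS-10
# package.
cape_gridded = np.vectorize(cape_1d, signature='(n),(n)->()')

t = np.ones((4, 5, 10))       # (ny, nx, nlev)
td = np.zeros((4, 5, 10))
result = cape_gridded(t, td)  # shape (4, 5)
```

`np.vectorize` still loops in Python internally, so this buys convenience and broadcasting semantics rather than speed; a compiled gufunc (as in TEOS-10) or Numba is needed for the latter.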
Thanks for the link. One important difference here is that MetPy has avoided needing to compile anything; my current hope is that applying some Numba may get us the performance increases without shipping compiled packages. We'll see...
I should add that part of making things efficient with Numba is to eschew writing vectorized code and instead write the explicit loops.
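The explicit-loop style mentioned above can be sketched as follows. This is an illustrative sketch, not MetPy code: `column_sums` is a hypothetical reduction, and the `try`/`except` makes the snippet run even where Numba is not installed.

```python
import numpy as np

try:
    from numba import njit  # optional dependency; fall back to pure Python
except ImportError:
    def njit(func):
        return func

@njit
def column_sums(data):
    """Sum each (nlev,) column of a (nlev, ny, nx) array.

    Written with explicit loops rather than vectorized NumPy
    expressions: Numba compiles this style to tight machine code with
    no temporary arrays, which is where the speedups come from.
    """
    nlev, ny, nx = data.shape
    out = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            for k in range(nlev):
                out[j, i] += data[k, j, i]
    return out
```

Under Numba the triple loop typically runs at C-like speed; without it, the same function still works, just at Python speed.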
Just a heads-up that the lack of any use case notebooks is now blocking progress on our project. It would be great to get some basic use cases checked in. They don't have to be fancy! Any sort of semi-realistic workflow on real data is enough for the systems people to move forward with analyzing performance.
@rabernat My go-to use case that I've used to evangelize to the atmospheric chemistry community is a timeseries analysis that can be found here. I can easily swap in an O(100 GB) multi-file NetCDF dataset instead of the binary dataset to fine-tune the example. Would that be a useful stop-gap while people develop more complex notebooks?
@darothen: yes, we would love to have your use case. The catch is that it would be best to use data that is already stored on NCAR's Glade/Cheyenne filesystem. Are any of your datasets in the NCAR Research Data Archive? Do you have a Cheyenne account? If not, I can request an account for you. If so, I can add you to the pangeo project group.
I still have Cheyenne access for an NSF-funded project which was extended through the end of the year; send me an e-mail and we can coordinate what account information you would need from me. I quickly skimmed the RDA and didn't quite find anything. But I have colleagues who run/ran CAM-Chem to compute things like surface ozone at global, hourly resolution for certain experiments and applications, and I could reach out to ask for permission to use their dataset. Alternatively, if there is a data agreement with Copernicus, then the MACC Reanalysis would be perfect for this application.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it had not seen recent activity. The issue can always be reopened at a later date.
We will work on developing this notebook. In particular, after setting up the notebook as mentioned in issue #1, we suggest the following:
We suggest doing this for a subset of MERRA2 data, but if @dopplershift has suggestions about which data to use (or has perhaps already done something like this to test MetPy), we welcome his input.
We are at a very early stage of this, but will start working on it in the coming days.