Description:
Given a temporally dense time series over a wide enough area, multitemporal's memory requirements can become excessive -- especially when you output an entire time series (after working some magic on it), which can push usage past 20 GiB.

Current implementation:
Reads input block-wise, but holds all the data in memory until the end and writes everything out in one shot.

Proposed resolution:
Write out data block-wise as well.

@bhbraswell @justinfisk -- seems reasonable, no?

Offline, @justinfisk mentioned that there might have been some reason that the output was not done in chunks. Is that ringing any bells for either of you?

My recollection is that after some initial discussions, at which point the current framework already existed, I just didn't want to write the code that would save and then reassemble the chunks. So, basically laziness. As long as each step in the processing chain has the data it needs, I can't think of a reason not to keep everything in chunks until the end if that is more performant.
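For illustration, here is a minimal sketch of the proposed read-process-write loop. The function name, the `(time, y, x)` layout, the block size, and the doubling "processing" step are all hypothetical stand-ins, not multitemporal's actual API; the point is only that each processed block is written immediately instead of being accumulated until the end:

```python
import numpy as np

def process_blockwise(src, dst, block_rows=256):
    """Process a (time, y, x) array one spatial block at a time.

    Each block is written to `dst` as soon as it is processed, so peak
    memory is one block rather than the whole output cube. In real use,
    `src` and `dst` would be on-disk arrays (e.g. np.memmap or a raster
    dataset opened for windowed I/O), not in-memory arrays.
    """
    n_rows = src.shape[1]
    for r0 in range(0, n_rows, block_rows):
        r1 = min(r0 + block_rows, n_rows)
        block = src[:, r0:r1, :]        # read one block
        result = block * 2.0            # placeholder for the real processing
        dst[:, r0:r1, :] = result       # write it out now, don't hold it
    return dst

# Small in-memory demo: 2 time steps, 4 rows, 3 cols.
src = np.arange(24, dtype="float64").reshape(2, 4, 3)
dst = np.empty_like(src)
process_blockwise(src, dst, block_rows=2)
```

Since each step in the processing chain only needs its own block, nothing forces the full cube to stay resident; the reassembly the comment above mentions is just writing each block into the right window of the output.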