To get the IRFs I have calculated the average noise, zenith, and azimuth for the entire run and then used the VEGAS interpolation. One thing I would like to check is that the run is not too long for this approach: it might be that we should use 10-minute bins or so. (I know that Nathan used 8-minute bins, but he was at LZA, where the effective area varies much faster with these parameters.) This could be done either by producing multiple files for each run or by doing some internal averaging and weighting.
Is there any limitation to producing, for instance, N DL3 files per anasum/stage5 output?
Subdividing a run into shorter periods (given that we have the zenith, azimuth, noise level, etc. per event) only means we need to generate more (smaller) DL3 files, with a finer interpolation of the IRFs (easy, same IRF file). The current code would need some improvement to allow that, but it is perfectly doable.
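The subdivision step itself is straightforward; a minimal sketch (function name and equal-duration slicing are my assumptions, not the actual code) could look like this:

```python
import numpy as np

def split_run_into_subruns(event_times, n_subruns):
    """Partition a run's events into n_subruns contiguous, equal-duration
    time slices. Each slice would then be written to its own (smaller)
    DL3 file, with IRFs interpolated at that slice's average parameters.

    Returns a list of boolean masks (one per sub-run) and the time edges.
    """
    edges = np.linspace(event_times.min(), event_times.max(), n_subruns + 1)
    # Clip so the last event (exactly at the upper edge) falls into the
    # final sub-run rather than an overflow bin.
    idx = np.clip(np.digitize(event_times, edges) - 1, 0, n_subruns - 1)
    masks = [idx == k for k in range(n_subruns)]
    return masks, edges
```

The same event columns (zenith, azimuth, noise) would then be averaged per mask, so the only extra cost is bookkeeping and one IRF interpolation per sub-run.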
The calculation of a correct livetime per sub-run could be a problem. I am not sure how it is currently calculated in ED/VEGAS, but from what I remember in MAGIC, we probably want to compute it before cutting down to only gamma-like events.
If I reach the point where I really want to implement this, I will speak with Gernot/Elisa and bother you guys again. Until then, I will try to have a first example of full-enclosure IRFs (averaged over the whole run).