test: update integration test to use round 12012 #295
Conversation
Nowadays, it takes ~50 seconds to fetch a batch of measurements. This makes re-running the integration test way too slow. In this change, I am adding `fetchMeasurementsWithCache` to cache the measurements on subsequent runs.

Signed-off-by: Miroslav Bajtoš <oss@bajtos.net>
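For illustration, here is a minimal sketch of what a cache like `fetchMeasurementsWithCache` could look like. Only the function name comes from this PR; the file-based cache directory, the `cid` key, and the injected `fetchMeasurements` callback are assumptions, not the actual implementation.

```javascript
import { mkdir, readFile, writeFile } from 'node:fs/promises'
import { join } from 'node:path'

// Hypothetical sketch: cache fetched measurements on disk so that
// subsequent integration-test runs skip the ~50s network fetch.
async function fetchMeasurementsWithCache (cid, { cacheDir = '.cache', fetchMeasurements } = {}) {
  await mkdir(cacheDir, { recursive: true })
  const cacheFile = join(cacheDir, `measurements-${cid}.json`)
  try {
    // Cache hit - reuse measurements stored by an earlier run
    return JSON.parse(await readFile(cacheFile, 'utf8'))
  } catch {
    // Cache miss - fall through to the slow fetch below
  }
  const measurements = await fetchMeasurements(cid)
  await writeFile(cacheFile, JSON.stringify(measurements))
  return measurements
}
```

The cache is keyed by the measurement batch identifier, so re-running the test against the same round never re-downloads the batch.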
We need a more recent round for which the spark-api endpoint returns "startEpoch", so that we can obtain the DRAND beacon.

Signed-off-by: Miroslav Bajtoš <oss@bajtos.net>
Code changes LGTM, but please remind me why we need to update the round being used?
For the deterministic task assignment, we need to map a Spark round to a DRAND epoch. We do so by mapping the Filecoin epoch (block number) at which the Spark round started to a point in time, and then mapping that point in time to a DRAND epoch. We started tracking the Spark round start epoch relatively recently; we don't have it for round 3602n. spark-api returns
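The two-step mapping described above can be sketched as follows. The Filecoin mainnet genesis timestamp and 30-second block time are well-known network parameters; the DRAND constants below are placeholders (assumptions), to be replaced with the genesis time and period of whichever beacon network Spark actually queries.

```javascript
// Well-known Filecoin mainnet parameters
const FILECOIN_GENESIS_UNIX = 1598306400 // 2020-08-24T22:00:00Z
const FILECOIN_BLOCK_TIME = 30 // seconds per epoch

// Assumed DRAND network parameters - substitute the real values
// for the beacon network used by Spark
const DRAND_GENESIS_UNIX = 1692803367
const DRAND_PERIOD = 3 // seconds per beacon round

// Step 1: map the Filecoin epoch at which the Spark round started
// to a point in time.
function filecoinEpochToUnixTime (epoch) {
  return FILECOIN_GENESIS_UNIX + epoch * FILECOIN_BLOCK_TIME
}

// Step 2: map that point in time to the DRAND round published
// at or before it (DRAND rounds are 1-based).
function unixTimeToDrandRound (unixTime) {
  return Math.floor((unixTime - DRAND_GENESIS_UNIX) / DRAND_PERIOD) + 1
}

function sparkRoundStartToDrandRound (startEpoch) {
  return unixTimeToDrandRound(filecoinEpochToUnixTime(startEpoch))
}
```

Without the recorded `startEpoch`, step 1 has no input, which is why the test has to move to a round for which spark-api can return it.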
```diff
-    total_measurements: '15889i',
-    valid_measurements: '15889i'
+    total_measurements: '44677i',
+    valid_measurements: '44677i'
```
This is interesting. Spark is recording ~3x more measurements per batch now.