Add daily scheduled rewards #131
Conversation
@bajtos do you have an idea how to best test this? I would suggest to pass a mocked IE contract and then query whether the scheduled rewards have been recorded in the DB.
Another question is how to get the participant addresses.

- Query the contract: when we query the contract, we only get those with scheduled rewards over 0.1 FIL. That is insufficient for plotting the road towards the threshold.
- Add to spark-api/evaluate: two potential sources for participants are spark-api and spark-evaluate. But how should these services expose the participants in a way that is not opinionated? (E.g. after which period of inactivity should the participant be removed from the list? Should it be queryable by date? Etc.)
- Query …
I think we have a few options here: downloading the measurements from web3.storage, querying Beryx, or adding a new event to the contract.

My thoughts on these options:
- Downloading measurements from web3.storage often fails. At the moment only spark-evaluate does this; now we would have a second service that can fail because of it. It's a known problem, but we'd also be investing more into it. It will double our web3.storage egress traffic - I think that shouldn't be an issue, but it's wasteful. This will give us the participant addresses we need. This option is quite easy to implement, and shouldn't take more than 1 day.
- Querying Beryx couples us to Beryx; if they decide to change their API, we must hope to find another one that works.
- Adding a new event isn't bad, but migrating to a new contract is painful and takes time.
observer/lib/observer.js (Outdated)
```js
    try {
      scheduledRewards = await ieContract.rewardsScheduledFor(address)
    } catch (err) {
      // Skip this participant if the RPC call fails
      console.error('Error querying scheduled rewards for', address, { cause: err })
      continue
    }
    console.log('Scheduled rewards for', address, scheduledRewards)
    // Upsert today's scheduled rewards for this participant
    await pgPool.query(`
      INSERT INTO daily_scheduled_rewards (day, address, scheduled_rewards)
      VALUES (now(), $1, $2)
      ON CONFLICT (day, address) DO UPDATE SET
        scheduled_rewards = EXCLUDED.scheduled_rewards
    `, [address, scheduledRewards])
  }
```
This is going to be very expensive. With 5k daily active participants (see Spark Public Dashboard), we are going to make 5k RPC API calls followed by 5k SQL queries.
(1)
Can we group RPC API calls and SQL queries into batches? I believe Ethers v6 supports request batching, and we are configuring the RPC API provider to use batching. Now, we need to find out how to trigger the batching behaviour.
What I would like to see at a high level:
```js
const BATCH_SIZE = 10 // for example, we need to tweak this

for (const addrBatch of splitIntoBatches(dailyParticipantAddresses, BATCH_SIZE)) {
  // is this going to trigger Ethers.js batching behaviour?
  const rewards = await Promise.all(
    addrBatch.map(addr => ieContract.rewardsScheduledFor(addr))
  )
  // run a single SQL query to update multiple rows
  await pgPool.query(`
    INSERT INTO daily_scheduled_rewards (day, address, scheduled_rewards)
    VALUES (now(), unnest($1::text[]), unnest($2::numeric[]))
    ON CONFLICT (day, address) DO UPDATE SET
      scheduled_rewards = EXCLUDED.scheduled_rewards
  `, [
    addrBatch,
    rewards
  ])
}
```
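For reference, `splitIntoBatches` is not defined in this PR; a minimal sketch of such a helper could be a small generator like this (name and shape are my assumption):

```js
// Hypothetical helper assumed by the loop above: yields consecutive
// slices of `items`, each at most `batchSize` elements long.
function * splitIntoBatches (items, batchSize) {
  for (let i = 0; i < items.length; i += batchSize) {
    yield items.slice(i, i + batchSize)
  }
}
```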
(The query is inspired by an existing query in spark-evaluate.)
(2)
We must be mindful of how many requests we send to the Glif RPC API. IMO, we shouldn't send all 5k queries as fast as the systems can handle, as that would put too much load on the RPC API provider.
I propose to introduce a small delay between iterations of this loop.

If we batch requests for 10 participants at a time, we need to send ~500 requests. If we use a 1s delay, then we will finish updating all participants in 500 seconds = 8.3 minutes. I think that's fast enough, since the scheduled rewards are updated every 20 minutes.
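A minimal way to implement that delay, assuming a plain `setTimeout`-based sleep (a sketch, not necessarily how the PR ends up doing it):

```js
// Sketch: pause between batches so we don't hammer the RPC provider
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))

for (const addrBatch of splitIntoBatches(dailyParticipantAddresses, BATCH_SIZE)) {
  // ...query rewards and upsert rows as shown above...
  await sleep(1000) // ~1s between batches => ~8.3 minutes for 5k participants
}
```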
Not really. A mocked IE contract sounds like a good start to me. We already used that approach in voyager-publisher tests IIRC (see filecoin-station/voyager-api#22)
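To illustrate, a test along those lines might look roughly like this. The function name `observeScheduledRewards` and its signature are my assumptions for the sketch; only the idea of injecting a mocked IE contract comes from the discussion:

```js
import assert from 'node:assert'

it('records scheduled rewards for participants', async () => {
  // Mocked IE contract: stub only the method the observer calls
  const ieContractMock = {
    rewardsScheduledFor: async (address) => 100n
  }
  await observeScheduledRewards({ ieContract: ieContractMock, pgPool })

  // Assert the rewards ended up in the database
  const { rows } = await pgPool.query(
    'SELECT address, scheduled_rewards FROM daily_scheduled_rewards'
  )
  assert(rows.length > 0)
})
```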
Great description of different options available 👌🏻 I posted my thoughts in #131 (comment) before I read your comments.
💯 Instead of adding a new mechanism for tracking participants, we can re-use data that spark-evaluate already maintains.

As I explained in #131 (comment), spark-evaluate maintains the table `daily_participants`. As for opinions about inactivity & querying by date - I guess the current solution is somewhat opinionated to serve the needs of the dashboard, but I also think it's flexible enough to support additional use cases, e.g. your work in this pull request.

Depending on how much time you are willing to spend on this feature, I propose to choose one of the following two ways forward:

- Use the existing `daily_participants` table as the source of participant addresses (quick to implement).
- Add a new event to the smart contract (the proper long-term solution, but it requires a contract migration).
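For illustration, the first option could look roughly like this; the table and column names are my assumptions based on this thread, not verified against the actual schema:

```js
// Sketch: fetch the most recent participant addresses from the table
// maintained by spark-evaluate (schema assumed, not confirmed here)
const { rows } = await pgPool.query(`
  SELECT DISTINCT participant_address
  FROM daily_participants
  WHERE day >= now() - interval '1 day'
`)
const dailyParticipantAddresses = rows.map(row => row.participant_address)
```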
Awesome, +1 to using `daily_participants`. I agree that adding the event is the right solution, but I don't think this is the right time for it.
Co-authored-by: Miroslav Bajtoš <oss@bajtos.net>
Blocked by #102
For filecoin-station/desktop#1552