Consume fewer XIDs when restarting primary #8290
base: main
Conversation
The pageserver tracks the latest XID seen in the WAL, in the nextXid field in the "checkpoint" key-value pair. To reduce the churn on that single storage key, it's not tracked exactly. Rather, when we advance it, we always advance it to the next multiple of 1024 XIDs. That way, we only need to insert a new checkpoint value into the storage every 1024 transactions.

However, read-only replicas now scan the WAL at startup to find any XIDs that haven't been explicitly aborted or committed, and treat them as still in-progress (PR #7288). When we bump up the nextXid counter by 1024, all those skipped XIDs look like in-progress XIDs to a read replica. There's a limited amount of space for tracking in-progress XIDs, so there's more cost to skipping XIDs now. We had a case in production where a read replica did not start up, because the primary had gone through many restart cycles without writing any running-xacts or checkpoint WAL records, and each restart added almost 1024 "orphaned" XIDs that had to be tracked as in-progress in the replica. As soon as the primary writes a running-xacts or checkpoint record, the orphaned XIDs can be removed from the in-progress XIDs list and the problem resolves, but if those records are not written, the orphaned XIDs just accumulate.

We should work harder to make sure that a running-xacts or checkpoint record is written at primary startup or shutdown. But at the same time, we can just make XID_CHECKPOINT_INTERVAL smaller, to consume fewer XIDs in such scenarios. That means that we will generate more versions of the checkpoint key-value pair in the storage, but we haven't seen any problems with that, so it's probably fine to go from 1024 to 128.
Storage team: any concerns from generating more churn on the single "checkpoint" key-value pair?
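To make the rounding behavior concrete, here is a minimal sketch of the advance-to-next-multiple logic described above. The names (`update_next_xid`, `XID_CHECKPOINT_INTERVAL`) are illustrative assumptions, not the actual pageserver code, and XID wraparound is ignored for brevity:

```rust
const XID_CHECKPOINT_INTERVAL: u32 = 1024; // this PR lowers it to 128

/// Advance `next_xid` past `seen_xid`, rounding up to the next multiple of
/// XID_CHECKPOINT_INTERVAL. Returns true if a new version of the
/// "checkpoint" key-value pair would have to be written to storage.
fn update_next_xid(next_xid: &mut u32, seen_xid: u32) -> bool {
    if seen_xid < *next_xid {
        // Still covered by the previously rounded-up value: no write needed.
        return false;
    }
    // Every never-used XID in the skipped range up to the new boundary will
    // look "in-progress" to a read replica that scans the WAL at startup.
    *next_xid = (seen_xid / XID_CHECKPOINT_INTERVAL + 1) * XID_CHECKPOINT_INTERVAL;
    true
}
```

With the interval at 1024, a restart that consumes only a handful of XIDs still rounds nextXid up by nearly 1024; at 128, the same restart skips at most 127 XIDs.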
3042 tests run: 2927 passed, 0 failed, 115 skipped (full report)
Code coverage* (full report)
* collected from Rust tests only
The comment gets automatically updated with the latest test results.
62b1e07 at 2024-07-05T17:32:17.337Z
Based on my understanding, this parameter will affect the frequency of writing to CHECKPOINT_KEY. 1024 -> 128 means 8 times more writes. Given the checkpoint file is small, I don't think it would be a huge concern for the pageserver storage.
Can we get a test on this PR that reproduces the many-transactions case that the change fixes? That would be very useful to let us instrument this case (in future we might add some key-level limit on delta depths). I'm not sure if there are tests elsewhere for RO replicas that cover this kind of thing: feels like the original issue was probably severe enough to warrant a reproducer.
I wonder if we can avoid alignment at all.
It doesn't show any noticeable impact on pgbench workload in my tests...
Just so I understand correctly: the workload that caused this production problem would take 8 times as long to hit this bug after we merge this PR, correct? And the hope is that within that 8 times longer time window, the primary will write a running-xacts or checkpoint record. Doesn't seem like a systematic fix to me, but I don't have big concerns wrt storage.
Edit: evidently readonly txns don't consume xids.
Maybe a stupid idea, but why don't we allow basebackup requests only for checkpoint LSNs?
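To make the 8x arithmetic above concrete, here is a hypothetical simulation (using the same illustrative rounding rule as the sketch in the description, not actual pageserver code) of a primary that consumes a single XID per restart cycle without ever writing a running-xacts or checkpoint record:

```rust
fn main() {
    for interval in [1024u64, 128] {
        let (mut next_xid, mut orphaned) = (interval, 0u64);
        for _ in 0..100 {
            // Each restart uses one real XID; rounding then skips the rest
            // of the interval, leaving XIDs that look in-progress to a replica.
            let seen = next_xid;
            let rounded = (seen / interval + 1) * interval;
            orphaned += rounded - seen - 1;
            next_xid = rounded;
        }
        println!("interval {interval}: {orphaned} orphaned XIDs after 100 restarts");
    }
}
```

This prints 102300 orphaned XIDs for interval 1024 versus 12700 for interval 128, i.e. roughly the 8x reduction discussed here, matching the "almost 1024 orphaned XIDs per restart" figure from the description.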
Actually, now we perform a fast shutdown and so write a shutdown checkpoint with a proper nextXid. So it is not correct to say that to reproduce the bug we need to perform 8 times more restarts. Most likely it will not be reproduced at all.
The size of a transaction commit record is usually larger, so adding a checkpoint record doesn't significantly increase storage consumption, especially if it is done only once every 128 transactions.
Sorry, I do not completely understand the idea. Actually, there is another solution; I wonder if the @neondatabase/storage team will want to implement it. But it will certainly complicate the pageserver.
Let's take this off GitHub => https://neondb.slack.com/archives/C03QLRH7PPD/p1720792077031629
For the record, I have no objections to this PR, and I don't expect anything to break.