Update SOKOL to parity 1.10 #107
@phahulin what do you think about having different versions of Parity on the chain, so that if something goes wrong with one version, we'll still have some nodes running a different version? Ideally we would have multiple clients, e.g. a go-ethereum implementation compatible with AuRa, but we don't have that now. What we could do to mitigate the risk is to run multiple versions of the same client at the same time. What do you think?
@igorbarinov I agree. I propose the following:
I thought the upgrade path from Parity 1.8.4 to 1.9.2 went well, and what Igor describes seems to follow that pattern. Identifying at least some nodes to hold in their current state is a good idea, particularly if the option to upgrade is left to the rest of the network. Having a known quantity of baseline nodes may help others feel more comfortable upgrading their nodes. There is less danger of breaking the network when a baseline exists, so less reticence to perform upgrades. I like and support the staged approach as described. Looking forward to this next phase!
@phahulin we could randomly select three nodes on Sokol/Core that will not upgrade, as long as it's not a hard fork
I think at the last upgrade for POA Core, the DevTeam just designated which nodes stayed on current code, and the others upgraded. That worked well. This also handles the situation where, if a Validator is slow to upgrade their node, one of the reserve nodes can upgrade in that window and the roles can be exchanged. I thought the last effort went well, and I encourage doing more of the same, along with Igor's suggestions. You guys are doing a great job!
@igorbarinov then I suggest we always update the MoC, and then select 2 validator nodes and 1 bootnode that will skip the update.
This sounds reasonable to me.
Successfully upgraded Jim O SOKOL Validator Chicago A to Parity version 1.10.0. Will hold off on upgrading Jim O SOKOL Validator Chicago B until directed by the POA DevTeam.
@phahulin let's have the same setup for the MoC (stable + staging versions)?
@igorbarinov the second MoC instance is launched, on CentOS
Actually, we shouldn't make a completely random selection of the validator nodes that skip the update; it would be better to choose them so that the gaps between them on the validators list are approximately equal. That way, if the other nodes fail, the network would still produce blocks at roughly constant intervals. That makes it Jim O'Regan (1 of 2 nodes) and Henry Vishnevsky who skip the update this time.
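The evenly-spaced selection described above can be sketched roughly as follows (a minimal illustration in Python; the function name and the validator list are hypothetical, not part of any actual POA tooling):

```python
def pick_skip_nodes(validators, k):
    """Pick k validators whose positions are spread roughly evenly
    across the list, so that if all upgraded nodes were to fail, the
    remaining nodes would still be spaced to produce blocks at
    roughly constant intervals under AuRa's round-robin ordering."""
    n = len(validators)
    step = n / k  # ideal gap between consecutive skipped nodes
    return [validators[int(i * step)] for i in range(k)]

# Hypothetical 12-validator list: picking 3 skip-nodes yields
# positions 0, 4 and 8, i.e. a constant gap of 4.
skipped = pick_skip_nodes([f"validator-{i}" for i in range(12)], 3)
print(skipped)
```

This is only a sketch of the selection criterion; in practice the choice was made by hand, as the comment above notes.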
This one can be closed now
User experience improvement PR
Update instructions: https://github.com/poanetwork/poa-devops/blob/master/docs/Update-parity-version.md
Things to prepare
Playbook:
Parity binary for orchestrator (https://github.com/phahulin/parity/tree/v1.10.0-disable-parity-whisper-extensions)
Not sure about the /api/health endpoint - it doesn't work without dapps enabled, maybe re-enable them (health api with disabled dapps openethereum/parity-ethereum#8245; (SOKOL UPD 1.10) Revert explicitly disabling dapps on bootnodes #108)
Update default version in playbook config ((SOKOL UPD 1.10) Update defaults for parity 1.10 #106)
Launch second MoC node on parity 1.9.2 and CentOS
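Since /api/health is unavailable while dapps are disabled, one possible fallback for checking a node after the upgrade is to poll the standard eth_blockNumber JSON-RPC method and confirm the chain is advancing. A minimal sketch (the host and RPC port in the comment are assumptions, not taken from the actual deployment):

```python
import json


def parse_block_number(raw: str) -> int:
    """Extract the block height from a raw eth_blockNumber JSON-RPC
    reply. The "result" field is a hex-encoded quantity per the
    Ethereum JSON-RPC specification."""
    return int(json.loads(raw)["result"], 16)


# A reply like the one below would come from something like
# (host and port are assumptions):
#   curl -s -X POST -H 'Content-Type: application/json' \
#     --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
#     http://127.0.0.1:8545
reply = '{"jsonrpc":"2.0","id":1,"result":"0x4b7"}'
print(parse_block_number(reply))  # 1207
```

Polling this twice with a delay longer than the step duration and checking that the height increased gives a rough liveness signal without dapps enabled.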
Things to do
Update nodes (round I):
Master of Ceremony (CentOS) (skips update)
Ping Andrew Cravenho to check poa explorer
Update nodes (round II):
Update nodes (round III):
Henry Vishnevsky (skips update)
Jim O'Regan SOKOL Validator Chicago A (skips update)
MM Azure EastUS Bootnode (skips update)
Check consensus
Update archiving nodes (round IV):