
1.9.5 block prop times #8264

Closed
atlanticcrypto opened this issue Mar 29, 2018 · 6 comments
Labels
F7-footprint 🐾 An enhancement to provide a smaller (system load, memory, network or disk) footprint.
M4-core ⛓ Core client code / Rust.
P2-asap 🌊 No need to stop dead in your tracks, however issue should be addressed as soon as possible.
Comments

@atlanticcrypto

I'm running:

  • Which Parity version?: 1.9.5
  • Which operating system?: Ubuntu Server 16.04
  • How installed?: via installer
  • Are you fully synchronized?: yes
  • Which network are you connected to?: ethereum
  • Did you try to restart the node?: yes

After upgrading to 1.9.5 to solve the peering issues from the stable 1.8 series, the 1.9.5 stable releases running on our production node servers are seeing horrible lag on block propagation times. These are geographically diverse machines hosted in our facilities; the third-party hosted cloud instances we run, which have historically had higher latency, are still on 1.8.9 stable and are performing considerably better than the 1.9.5 release. Downgrading our production nodes to 1.8.11 stable solved the propagation-time issue, so it has to be something in the 1.9.x series.

@5chdn 5chdn added P2-asap 🌊 No need to stop dead in your tracks, however issue should be addressed as soon as possible. F7-footprint 🐾 An enhancement to provide a smaller (system load, memory, network or disk) footprint. M4-core ⛓ Core client code / Rust. labels Apr 3, 2018
@5chdn 5chdn added this to the 1.11 milestone Apr 3, 2018
@5chdn 5chdn modified the milestones: 1.11, 1.12 Apr 24, 2018
@folsen
Contributor

folsen commented May 21, 2018

@Njcrypto we've done a lot of work on the peering issues in 1.10 and 1.11; can you please try 1.11.1 and report back here if the issue persists? I will close the issue in the meantime but will reopen it if you report back with issues. Cheers.

@folsen folsen closed this as completed May 21, 2018
@atlanticcrypto
Author

@folsen I installed a 1.10.3 stable node at a new facility last week, set the min peer count to 1000, and current peer connections total 36.

We are still having peer drop issues with our other nodes (running 1.8.x stable), but they are not as bad as on the 1.10.x series. I believe this is the driver of block propagation latency: I am unable to sustain a large peer connection set.

In the 1.7.5+ series I was able to maintain 500+ peer connections on each node, and our block prop times were awesome.

I also believe this peer set issue is linked to my other post on orphan rates.

Unfortunately, with the node network being so diverse in quality (meaning professionally hosted nodes AND college dorm room desktop nodes), maintaining that large peer group is extremely important.

I would manually set my reserved peers to the most robust nodes, but the lack of transparency on node performance (at least from what I've found) limits my ability to do that.
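As a rough sketch, the manual approach could look something like this, assuming Parity's standard --reserved-peers and --reserved-only options and a plain-text file with one enode URL per line (the node IDs and addresses below are placeholders):

    # reserved.txt: one enode URL per line (IDs and addresses are placeholders)
    enode://<node-id-1>@203.0.113.10:30303
    enode://<node-id-2>@203.0.113.11:30303

    # pin the client to that hand-picked set
    parity --reserved-peers /path/to/reserved.txt --reserved-only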

Is there a way to set reserved peers based upon their uptime and performance?

@folsen
Contributor

folsen commented May 21, 2018

We've introduced some improvements in 1.11 as well, but so far deleting nodes.json and setting --no-discovery seems to help people's peering issues the most. I myself run a bootnode with --min-peers 150 and a max of 1000, and always have around 400 peers connected. This is running on 1.10.x. There were some more improvements in 1.11, with more stats tracking of peers, and we also have a feature in the pipeline to do longer-term banning of bad peers. If you could try 1.11 as well, that would be helpful. Any other network config you could share also helps.
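For reference, that workaround boils down to something like the following; the nodes.json location shown is an assumption and varies with platform and --base-path, while the flags are Parity's standard CLI options:

    # drop the cached node table (path is an assumption; depends on --base-path and platform)
    rm ~/.local/share/io.parity.ethereum/network/nodes.json

    # restart with discovery off and a generous peer range
    parity --no-discovery --min-peers 150 --max-peers 1000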

Is there a way to set reserved peers based upon their uptime and performance?

We're adding more and more heuristics to sort of simulate this.

The 1.7.5+ series was also a different "era"; I'm not sure it's fair to compare to that time unless you've tried it more recently. Full nodes are dropping like flies these days; people just don't run them anymore.

All that said, 36 sounds extremely low. Opening this issue up again for investigation, as there might be something else going on.

@folsen folsen reopened this May 21, 2018
@atlanticcrypto
Author

Just restarted the 1.10.x node with the discovery option set to false and removed the nodes.json file. This was a clean install though, so removing that nodes.json file seems like a band-aid?

I'll let you know how this is performing in a few hours.

@5chdn 5chdn modified the milestones: 2.0, 2.1 Jul 17, 2018
@5chdn 5chdn modified the milestones: 2.1, 2.2 Sep 11, 2018
@5chdn 5chdn modified the milestones: 2.2, 2.3 Oct 29, 2018
@Tbaut
Contributor

Tbaut commented Nov 16, 2018

I believe this has been drastically improved in the past releases. Closing as stale.

@Tbaut Tbaut closed this as completed Nov 16, 2018
@5chdn
Contributor

5chdn commented Nov 27, 2018

If not, #9954 - see also #9576
