
New balancer strategy: sortingCost #13254

Conversation

@AmatyaAvadhanula (Contributor) commented Oct 23, 2022

Add a new "sortingCost" balancer strategy that is faster than cachingCost while making decisions identical to cost in all cases.

Description

#2972 proposes a cost function for segment balancing.
While this helps achieve an optimal distribution of segments across servers, it can be slow on large clusters.

The cachingCost strategy was an attempt to make the same decisions as cost, but faster. However, discrepancies in its current implementation lead to uneven distribution and slower convergence in the presence of segments with multiple granularities.

This PR introduces sortingCost, a simple optimization of the original cost strategy. It computes the same cost values as cost, even in the presence of multiple granularities, while being as fast as, if not faster than, cachingCost.
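For context, the cost strategy scores a pair of segments with an exponential half-life decay integrated over their time intervals. The sketch below is an illustration of that idea, not Druid's implementation: it uses a simplified decay rate of 1 and only handles disjoint intervals, where the double integral factors into a cheap closed form.

```python
import math

def interval_cost_disjoint(x1, x2, y1, y2):
    # Closed form of the double integral of e^-(y - x) for
    # x in [x1, x2] and y in [y1, y2], assuming x2 <= y1 (disjoint intervals).
    # Since x <= y everywhere, the integrand separates into
    # (integral of e^x dx) * (integral of e^-y dy).
    assert x2 <= y1
    return (math.exp(x2) - math.exp(x1)) * (math.exp(-y1) - math.exp(-y2))

def interval_cost_numeric(x1, x2, y1, y2, steps=400):
    # Midpoint-rule approximation of the same integral, for cross-checking.
    dx = (x2 - x1) / steps
    dy = (y2 - y1) / steps
    total = 0.0
    for i in range(steps):
        x = x1 + (i + 0.5) * dx
        for j in range(steps):
            y = y1 + (j + 0.5) * dy
            total += math.exp(-abs(x - y)) * dx * dy
    return total
```

For two unit intervals separated by a gap, the closed form and the numeric approximation agree to several decimal places, which is why the closed form can replace per-point integration entirely.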

Add sortingCost strategy

The perf improvements can be checked by running SortingCostComputerTest#perfComparisonTest.
With 100k segments, cost computation is about 2000x faster. However, the overall coordinator cycle is unlikely to speed up as drastically.
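The description does not spell out the optimization, but the name suggests keeping segments sorted by time. One reason sorting helps with an exponential-decay cost: once points are sorted, the decayed contribution of all earlier points can be carried forward with a single multiplication per step, turning an O(n²) pairwise sum into O(n log n). The toy benchmark below illustrates that trick on point timestamps with decay rate 1; it is an assumption about the mechanism, not code from this PR.

```python
import math
import random
import time

def naive_cost(times, lam=1.0):
    # O(n^2) baseline: sum of exp(-lam * |ti - tj|) over all unordered pairs.
    n = len(times)
    return sum(
        math.exp(-lam * abs(times[i] - times[j]))
        for i in range(n) for j in range(i + 1, n)
    )

def sorted_cost(times, lam=1.0):
    # O(n log n): after sorting, `decayed` holds the summed decay of every
    # earlier point relative to the current one; each step re-decays it once.
    total = 0.0
    decayed = 0.0
    prev = None
    for t in sorted(times):
        if prev is not None:
            decayed = (decayed + 1.0) * math.exp(-lam * (t - prev))
        total += decayed
        prev = t
    return total

random.seed(42)
times = [random.uniform(0, 24 * 365) for _ in range(2000)]

t0 = time.perf_counter()
slow = naive_cost(times)
t1 = time.perf_counter()
fast = sorted_cost(times)
t2 = time.perf_counter()

print(f"naive: {t1 - t0:.3f}s  sorted: {t2 - t1:.3f}s")
```

Even at 2,000 points the sorted version wins by orders of magnitude; at the 100k segments mentioned above the gap would be far larger, which is consistent with the reported speedup.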

Add simulation

Simulate with different cost strategies using SegmentLoadingTest#testLoadAndBalanceSeveral.

Here are the results of the simulation with about 30k segments of hourly, weekly, and yearly granularity over 500 iterations.
Segments were loaded onto 3 historicals for the first 50 iterations and then balanced for the remaining iterations after adding 2 more historicals.

cachingCost : 317590 ms

  • Server[tier_t1__hist__1, historical, tier_t1] has 1 left to load, 0 left to drop, 9,498 served, 10 bytes queued, 27,930 bytes served.
  • Server[tier_t1__hist__3, historical, tier_t1] has 0 left to load, 0 left to drop, 9,363 served, 0 bytes queued, 28,308 bytes served.
  • Server[tier_t1__hist__2, historical, tier_t1] has 1 left to load, 0 left to drop, 10,193 served, 1 bytes queued, 28,490 bytes served.
  • Server[tier_t1__hist__5, historical, tier_t1] has 35 left to load, 0 left to drop, 14,432 served, 89 bytes queued, 43,781 bytes served.
  • Server[tier_t1__hist__4, historical, tier_t1] has 18 left to load, 0 left to drop, 15,914 served, 45 bytes queued, 46,451 bytes served.

cost : 765574 ms

  • Server[tier_t1__hist__4, historical, tier_t1] has 16 left to load, 0 left to drop, 11,659 served, 25 bytes queued, 34,492 bytes served.
  • Server[tier_t1__hist__5, historical, tier_t1] has 10 left to load, 0 left to drop, 11,927 served, 19 bytes queued, 34,760 bytes served.
  • Server[tier_t1__hist__2, historical, tier_t1] has 12 left to load, 0 left to drop, 11,575 served, 12 bytes queued, 34,867 bytes served.
  • Server[tier_t1__hist__1, historical, tier_t1] has 8 left to load, 0 left to drop, 12,144 served, 8 bytes queued, 35,274 bytes served.
  • Server[tier_t1__hist__3, historical, tier_t1] has 11 left to load, 0 left to drop, 12,095 served, 29 bytes queued, 35,567 bytes served.

sortingCost : 266421 ms

  • Server[tier_t1__hist__4, historical, tier_t1] has 12 left to load, 0 left to drop, 11,492 served, 21 bytes queued, 34,001 bytes served.
  • Server[tier_t1__hist__2, historical, tier_t1] has 3 left to load, 0 left to drop, 10,258 served, 21 bytes queued, 34,036 bytes served.
  • Server[tier_t1__hist__5, historical, tier_t1] has 28 left to load, 0 left to drop, 13,052 served, 37 bytes queued, 35,390 bytes served.
  • Server[tier_t1__hist__1, historical, tier_t1] has 20 left to load, 0 left to drop, 12,256 served, 38 bytes queued, 35,701 bytes served.
  • Server[tier_t1__hist__3, historical, tier_t1] has 8 left to load, 0 left to drop, 12,342 served, 17 bytes queued, 35,832 bytes served.

This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • a release note entry in the PR description.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added or updated version, license, or notice information in licenses.yaml
  • added comments explaining the "why" and the intent of the code wherever it would not be obvious to an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.


github-actions bot commented Jul 4, 2024

This pull request has been marked as stale due to 60 days of inactivity.
It will be closed in 4 weeks if no further activity occurs. If you think
that's incorrect or this pull request should instead be reviewed, please simply
write any comment. Even if closed, you can still revive the PR at any time or
discuss it on the dev@druid.apache.org list.
Thank you for your contributions.

@github-actions bot added the stale label Jul 4, 2024

github-actions bot commented Aug 1, 2024

This pull request/issue has been closed due to lack of activity. If you think that
is incorrect, or the pull request requires review, you can revive the PR at any time.

@github-actions bot closed this Aug 1, 2024