[5.x] Make autoscaling rate configurable #874

Merged
merged 11 commits on Aug 20, 2020
Conversation

@hivokas (Contributor) commented on Aug 18, 2020

This PR makes the autobalancing rate configurable for applications that use the auto queue balancing strategy.

Problem

Imagine we have the following supervisor config:

'supervisor-1' => [
    'connection' => 'redis',
    'queue' => ['high', 'low'],
    'balance' => 'auto',
    'minProcesses' => 5,
    'maxProcesses' => 150,
    'tries' => 1,
],

Let's say we dispatch 1,000 jobs to the low queue for something like an email campaign. Currently, autobalancing will only adjust the number of workers at a maximum rate of ±1 worker every 3 seconds. So, it will take a long time to increase the number of workers from 5 (minProcesses) to 150 (maxProcesses). There's no way to change this.
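
To put numbers on it: going from 5 to 150 workers requires 145 single-worker increments, and at one increment per 3-second cooldown that is roughly 145 × 3 ≈ 435 seconds (over 7 minutes) spent just ramping up, regardless of how much work is waiting.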

Solution

With this PR we make the autobalancing rate configurable:

/*
|--------------------------------------------------------------------------
| Autoscaling
|--------------------------------------------------------------------------
|
| Here you may define the autoscaling settings used by your application.
|
*/

'autoscaling' => [
    'cooldown' => 3,
    'max_shift' => 1,
],

cooldown - the number of seconds to wait between auto-scaling attempts
max_shift - the maximum number of processes to add or remove per scaling attempt
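
To make the two settings concrete, here is a minimal sketch of the clamping idea (not Horizon's actual implementation; $current, $target, and $lastScaledAt are hypothetical stand-ins for the supervisor's current pool size, desired pool size, and last scale time):

// Hypothetical illustration only, not Horizon's real scaling code.
$cooldown = config('horizon.autoscaling.cooldown', 3);   // seconds between scaling attempts
$maxShift = config('horizon.autoscaling.max_shift', 1);  // max workers added/removed per attempt

if (now()->diffInSeconds($lastScaledAt) >= $cooldown) {
    // Clamp the desired change so no single attempt moves more than max_shift workers.
    $shift = max(-$maxShift, min($maxShift, $target - $current));

    $current += $shift;
}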

Breaking changes

This PR doesn't introduce any breaking changes, since the default values for cooldown and max_shift match the current hard-coded behavior. Even if someone upgrades without adding these config keys, Horizon will fall back to those defaults.

However, I decided that it might be better to point this PR to the 5.x branch.

Benefit

As a baseline without this change, let's dispatch 750 jobs at once (each job sleeps for 5 seconds). It takes 135 seconds to process these jobs.

Let's set 'cooldown' => 1 and 'max_shift' => 50 and perform the same test one more time.

This time, it took only 30 seconds, because autoscaling happens much faster.
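
For reference, 750 jobs × 5 seconds each is 3,750 job-seconds of work, so with maxProcesses of 150 the theoretical floor is 25 seconds of wall time. Finishing in 30 seconds means the pool scaled to (or near) full capacity almost immediately, whereas the 135-second baseline is dominated by the ±1-worker-per-3-seconds ramp-up.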

Related issues

#658

@taylorotwell (Member) commented:
What if you want the cooldown and max_shift to be different for each supervisor configuration? Any reason it's not part of that configuration?

@hivokas (Contributor, Author) commented on Aug 19, 2020

What if you want the cooldown and max_shift to be different for each supervisor configuration?

Hm, that sounds like a good idea.

Any reason it's not part of that configuration?

The reason is I didn't think about that.

@hivokas (Contributor, Author) commented on Aug 19, 2020

Does the following config example look good to you, @taylorotwell?

'supervisor-1' => [
    'connection' => 'redis',
    'queue' => ['high', 'low'],
    'balance' => 'auto',
    'minProcesses' => 5,
    'maxProcesses' => 150,
    'tries' => 1,
    'maxShift' => 20,
    'cooldown' => 1,
],

@hivokas (Contributor, Author) commented on Aug 19, 2020

If yes, I'll work on moving cooldown and max_shift to the supervisor config.

@taylorotwell (Member) commented on Aug 20, 2020

Maybe rename cooldown to balanceCooldown?

Other than that it looks good.

@hivokas (Contributor, Author) commented on Aug 20, 2020

@taylorotwell I've updated the PR. balanceCooldown and autoScaleMaxShift are part of the supervisor configuration now.
Also, I've updated the default config a bit.

@taylorotwell merged commit e807042 into laravel:master on Aug 20, 2020
@taylorotwell (Member) commented:
Renamed autoScaleMaxShift to balanceMaxShift.
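
For readers following along, the supervisor options as merged should therefore look roughly like this (values are illustrative, taken from the earlier example in this thread):

'supervisor-1' => [
    'connection' => 'redis',
    'queue' => ['high', 'low'],
    'balance' => 'auto',
    'minProcesses' => 5,
    'maxProcesses' => 150,
    'tries' => 1,
    'balanceMaxShift' => 20,
    'balanceCooldown' => 1,
],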

@hivokas (Contributor, Author) commented on Aug 20, 2020

Thanks for merging!
