
Bring back Step Load Pattern or support a centralized load scheduler to ensure load consistency in distributed mode #1734

Closed
delulu opened this issue Mar 18, 2021 · 9 comments
Labels
feature request stale Issue had no activity. Might still be worth fixing, but don't expect someone else to fix it

Comments

@delulu
Contributor

delulu commented Mar 18, 2021

Is your feature request related to a problem? Please describe.

With the previous Step Load Pattern, it was much easier for us to change the load plan from the web UI or with command line parameters:

  1. It's better to decouple the load plan from the locustfile, so we don't have to update each locustfile when the load plan changes.
  2. We usually apply different load levels to the same service at different scale sizes, so we want to parameterize the load plan together with the scale size of the target service.

The current custom load shape can introduce a consistency issue in distributed mode:

  1. The custom load plan is executed on the worker side, and there's no way to ensure all the workers change the load at the same time.
  2. In the Kubernetes world it's common for pods to be restarted or reassigned to another VM node due to capacity issues, so Locust workers might get restarted frequently. How do we recover the previous load status when a worker restarts, and how do we ensure all the workers are simulating the same load?

Describe the solution you'd like

I'd like to have Step Load Pattern back, or another centralized load scheduler to ensure load consistency in distributed mode.
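For context, the kind of plan the old Step Load Pattern produced can be sketched as a pure function of run time. This is only an illustration (the function name and its parameters are made up for this sketch, not Locust API), though it mirrors the `(user_count, spawn_rate)`-or-`None` contract of Locust's `LoadTestShape.tick()`:

```python
def step_tick(run_time, step_users=10, step_duration=30,
              max_users=100, spawn_rate=10):
    """Illustrative step load plan: add `step_users` users every
    `step_duration` seconds until `max_users` is reached, then stop.

    Returns (user_count, spawn_rate), or None once the plan is done,
    mirroring the shape of Locust's LoadTestShape.tick() contract.
    """
    total_steps = max_users // step_users
    if run_time >= total_steps * step_duration:
        return None  # plan exhausted, test should stop
    current_step = int(run_time // step_duration) + 1
    return (min(current_step * step_users, max_users), spawn_rate)
```

Because the whole plan is a handful of scalar parameters, it could in principle be supplied from the command line or the web UI instead of being hard-coded in each locustfile.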

@delulu
Contributor Author

delulu commented Mar 18, 2021

@heyman @cyberw @max-rocket-internet please have a review, thank you!

@cyberw
Collaborator

cyberw commented Mar 21, 2021

Is there anything that wouldn't be solved by #1621? (if it does get merged, I mean)

@max-rocket-internet
Contributor

please have a review, thank you!

Apart from the added complexity it creates in the project, I don't really have an opinion on adding it back.

But I do think a more general approach might be better for everyone; for example, what is detailed in #1632 could be used for step load or any load test where the user wants to tweak settings from the web UI.

@max-rocket-internet
Contributor

In the Kubernetes world it's common for pods to be restarted or reassigned to another VM node due to capacity issues, so Locust workers might get restarted frequently. How do we recover the previous load status when a worker restarts, and how do we ensure all the workers are simulating the same load?

How is this related to bringing step load back?

If the master is restarted then forget it, your load test is over and metrics are lost. But workers coming and going should be OK in the current version of locust?

I recommend using a PodDisruptionBudget for the master as shown here: https://github.com/deliveryhero/helm-charts/blob/master/stable/locust/templates/master-pdb.yaml 🙂

@delulu
Contributor Author

delulu commented Apr 29, 2021

How is this related to bringing step load back?

If the master is restarted then forget it, your load test is over and metrics are lost. But workers coming and going should be OK in the current version of locust?

Yes, we can have some mechanism to ensure the health of the master node, for example a PodDisruptionBudget, or back up the master state for recovery.

As for worker state recovery: the previous step load used a centralized way to distribute the traffic, so when a worker left or joined the Locust cluster, the master would redistribute the latest load tasks to all the workers, recovering the previous load status.

With the current custom load shape approach, the load plan is executed on the worker side, so when a worker restarts, it runs the plan from the start.

For example, if the test plan is a step load plan from 10 to 100 users and some workers restart when it hits 50 users, those workers will run from the start with the initial load rather than the previous load of 50 users.
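One way to make such a plan restart-safe is to compute the target purely from elapsed time since test start, so a restarted node only needs the persisted start timestamp to resume at the correct step. This is a sketch of that idea, not existing Locust behaviour; the helper name and parameters are hypothetical:

```python
import time

def current_target(start_time, now=None, step_users=10,
                   step_duration=30, max_users=100):
    """Recompute the step load target from elapsed time alone.

    Because the target is a pure function of (now - start_time),
    a restarted node that knows the original start timestamp can
    resume at the correct step (e.g. 50 users) instead of falling
    back to the initial load of 10 users.
    """
    if now is None:
        now = time.time()
    elapsed = max(0.0, now - start_time)
    step = int(elapsed // step_duration) + 1
    return min(step * step_users, max_users)
```

The start timestamp is the only state that has to survive a restart, which is why holding it centrally (on the master) is so much easier than having every worker track its own schedule.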

@delulu
Contributor Author

delulu commented Apr 29, 2021

Is there anything that wouldn't be solved by #1621? (if it does get merged, I mean)

There's no example locustfile, but it claims that "The master is now responsible to compute the distribution of users and then dispatching a portion of this distribution to each worker", which seems to support a centralized load scheduler.

However, it seems the load plan is still coupled with the locustfile, rather than being exposed in the Locust master web UI or web API.

@cyberw
Collaborator

cyberw commented Jun 17, 2021

I'd love to see a simple, user-friendly solution (probably based on load test shapes) that does something similar to what step load did. Maybe after #1724 / #1621 though.

@github-actions

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 10 days.

@github-actions github-actions bot added the stale Issue had no activity. Might still be worth fixing, but don't expect someone else to fix it label Aug 17, 2021
@github-actions

This issue was closed because it has been stalled for 10 days with no activity. This does not necessarily mean that the issue is bad, but it most likely means that nobody is willing to take the time to fix it. If you have found Locust useful, then consider contributing a fix yourself!
