
Is it possible to autoscale slaves? #1066

Closed · max-rocket-internet opened this issue Aug 16, 2019 · 12 comments
@max-rocket-internet
Contributor

For example, when running on Kubernetes this is very easy to set up by adding a HorizontalPodAutoscaler, but will Locust be OK with new slaves connecting to the master as the user count is increased?
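
For illustration, here is a minimal sketch of such an autoscaler using the official `kubernetes` Python client; the `locust-worker` Deployment name, the namespace, and the thresholds are placeholder assumptions, not anything Locust ships with:

```python
# Hypothetical sketch: create an HPA targeting a Locust worker Deployment.
# "locust-worker", the namespace, and all thresholds are illustrative.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="locust-worker"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1",
            kind="Deployment",
            name="locust-worker",  # assumed name of the slave Deployment
        ),
        min_replicas=2,
        max_replicas=20,
        target_cpu_utilization_percentage=70,  # scale out on CPU pressure
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

The same thing can be done declaratively, or with `kubectl autoscale deployment locust-worker --min=2 --max=20 --cpu-percent=70`.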

@cgoldberg
Member

Please ask general questions in the Slack channel.

@max-rocket-internet
Contributor Author

@cgoldberg

It's not a general question. It's a technical question.

Is it possible to autoscale slaves?
How does the master distribute work to a slave that connects long after the load test has started?
How does this affect the hatch rate?

Also, sending people to the Slack channel is really annoying. It's not searchable from GitHub. It's not indexed by search engines. It requires signing up, and all the messages and info about this open source project are then owned by a separate company.

@cgoldberg
Member

This is a tracker for the development of Locust, not a user support forum. It exists solely to keep track of development issues that need to be fixed.

@max-rocket-internet
Contributor Author

@cgoldberg

> It exists solely to keep track of development issues

OK, makes sense I guess. I didn't realise that.

@aldenpeterson-wf
Contributor

aldenpeterson-wf commented Aug 26, 2019

Keep in mind that the template for any issue created here contains (I removed the comment part):

> <!-- For general questions about how to use Locust, use either the Slack link provided in the Readme or ask a question on Stack Overflow tagged Locust. -->

The best place for questions is either Slack or Stack Overflow. Even better is a PR to update the documentation too :)

@max-rocket-internet
Contributor Author

@cgoldberg @aldenpeterson-wf

Would you not consider using issue labels like many other repos? I understand you don't want to provide user support, and that's totally fine, but as you can see, most people treat a project's GitHub issues as the go-to place for users as well as developers. With labels and filtering you can support both without too much headache 😅

Pushing people to Slack is easy, but it's so ephemeral, even for the people who do go to the effort of creating an account.

@matti

matti commented Nov 13, 2019

@max-rocket-internet Yes, it is definitely possible, but be aware of the non-cloud-native nature of Locust (#1136).

@heyman
Member

heyman commented Nov 13, 2019

> but be aware of the non-cloud-native nature of Locust (#1136)

I've used Locust in cloud environments since 2011, and I don't think it's fair to call it "non-cloud native" on the basis that you can't arbitrarily kill the master process. You could say it's not built for high availability though.

> will Locust be OK with new slaves connecting to the master as the user count is increased?

EDIT: I previously said that Locust doesn't automatically re-distribute the load when new slave nodes connect, which is wrong. I completely forgot that we now do support re-distributing the load. My bad.

@max-rocket-internet
Contributor Author

> we now do support re-distributing the load. My bad.

Awesome! How does this work exactly? What does the master do if it has 10 slaves running 10 clients and an 11th slave connects?

@matti

matti commented Nov 13, 2019

@max-rocket-internet It resets everything to 0 and distributes the new value to all slaves. At least that's what it looks like (see #1143).

So if you have a target of 10000 and 10 slaves, it will first run 10000/10 on every slave and then 10000/11, but some slaves go down to zero (at least it reports them as 0), which makes it kind of slow to come back to the previous value.
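
To make the arithmetic concrete, here is an illustrative back-of-the-envelope sketch (not Locust source code) of that even re-split:

```python
# Illustrative arithmetic only (not Locust source): an even re-split of a
# fixed 10000-user target when an 11th slave joins.
total = 10000

before = total / 10  # 1000 users on each of the 10 original slaves
after = total / 11   # ~909 users per slave once the 11th connects

print(round(after))           # 909: roughly what the new slave has to spawn
print(round(before - after))  # 91: roughly what each existing slave sheds
```

Those per-slave numbers line up with the ~90 killed / ~900 spawned figures in the next comment.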

@heyman
Member

heyman commented Nov 13, 2019

@max-rocket-internet The master will send out new hatch messages to the slave nodes, which will result in the existing 10 nodes each killing ~90 of their running Locust users and the new node spawning ~900 users.

A bug was causing a temporary drop in the current RPS when this happened, as reported by #1143, but that has now been fixed in master.

@max-rocket-internet
Contributor Author

Amazing 🚀
