
CPU bound from locust vs CPU bound from the web app? #188

Closed
ghost opened this issue Aug 16, 2014 · 4 comments

Comments

ghost commented Aug 16, 2014

Hi all,

When locust is running with multiple slaves and the web app is also running with multiple processes, it is sometimes difficult to interpret the resulting QPS. In particular, it is unclear whether the QPS is:

  1. CPU bound on the locust side (i.e., more slaves are needed)
  2. CPU bound on the web app side (i.e., more processes are needed)
  3. I/O bound (network, database, etc)

Case 3) is probably trickier, but is there a simple way in locust to distinguish between 1) and 2)?

Thanks,

Jahaja (Member) commented Aug 16, 2014

Hi,

  1. I would recommend adding another locust slave once the current ones reach >= 75-80% CPU utilization. Locust itself should never be close to being the limiting factor.
  2. In this case you should see response times increase as you add more users, while the RPS stays the same. We usually set an "acceptable response time threshold" for our tests, say 250 ms, and try to find the maximum RPS/number of users that the application can handle within that threshold.
  3. It's also worth checking whether the locust machine(s) are limited by network I/O. Use something like iftop to see if network usage stays fixed around some threshold even as you increase the number of users (given that locust's CPU utilization isn't a concern).
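To make point 1 concrete, here is a minimal sketch (not part of locust) of a check you could run on a slave machine. It uses the standard library's Unix-only `os.getloadavg()` as a rough stand-in for CPU utilization; the 75% threshold is just the rule of thumb from the comment above.

```python
import os
import multiprocessing

CPU_THRESHOLD = 0.75  # fraction of total capacity; rule-of-thumb from the comment above


def needs_more_slaves(load_per_core, threshold=CPU_THRESHOLD):
    """True if the normalized load suggests this slave is near its CPU limit."""
    return load_per_core >= threshold


def current_load_per_core():
    """1-minute load average divided by core count (Unix only)."""
    one_minute, _, _ = os.getloadavg()
    return one_minute / multiprocessing.cpu_count()


if __name__ == "__main__":
    load = current_load_per_core()
    print("load/core: %.2f -> add another slave: %s" % (load, needs_more_slaves(load)))
```

Load average is a coarser signal than instantaneous CPU percentage (a tool like `top`, or the third-party `psutil` package, gives a more direct reading), but it is enough to spot a slave that is pinned.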

ghost (Author) commented Aug 18, 2014

That's very helpful, thanks! Just to clarify one point:

In this case you should be seeing increasing response times when adding more users - the RPS stays the same.

Why would the response time increase? Is it because the requests waiting to be processed accumulate in some kind of queue, which causes some overhead for the web app?

Thanks,

heyman (Member) commented Aug 18, 2014

Why would the response time increase? Is it because the requests waiting to be processed accumulate in some kind of queue, which causes some overhead for the web app?

It's common that when a web server can't process any more simultaneous requests, it queues up incoming requests to be handled once the current ones are processed. In that case you would see increasing response times, but you could also start to see a lot of error responses if the requests are dropped due to a timeout before they get processed.

ghost (Author) commented Aug 18, 2014

Awesome, thanks!

@ghost ghost closed this as completed Aug 18, 2014