
How to use locust? #522

Closed
phuongnd08 opened this issue Jan 21, 2017 · 13 comments


@phuongnd08

This is not about step-by-step usage, but rather about how to use Locust in a useful way.
I'm trying to run Locust with 400 concurrent users at a hatch rate of 100.
I used these settings to test against my Rails application, which serves 4 simple GET requests, and it yielded around 40 req/s. Then I used them to test a lightweight Python server launched with `python -m SimpleHTTPServer 8000`, serving 1-byte files, and the result was around 60 req/s. Does that mean my Rails app is fast? I was really surprised to find that the Python SimpleHTTPServer can only do 60 req/s serving a 1-byte file (I'm using a MacBook with 16 GB of RAM, so machine performance shouldn't be an issue; also, my ulimit is 20k, which I would think is too high to constrain the test).

@phuongnd08
Author

I did another test against a lighttpd server and was surprised to find the req/s was even worse:
[Screenshot: lighttpd test results, 2017-01-21]

Does that mean Locust is unable to hit more than 60 req/s no matter how fast the web server is?

@phuongnd08
Author

I tried wrk (although it has less Python-ish power). It easily hits 200 req/s against the lighttpd server and 80 req/s against our Rails server. So this is starting to look like a performance issue with Locust to me.

@heyman
Member

heyman commented Jan 21, 2017

How is the CPU/core usage of the Locust Python process while you're running the test? Are you running a single Locust process? If so, what happens if you run a Locust master process and multiple slave processes? If I recall correctly, wrk uses multiple cores, which could perhaps explain the 4x difference if you're only running a single Locust process.

With that said, Locust will never reach as high an RPS as the super simple HTTP load generators like ApacheBench or wrk. The USP and goal of Locust is to provide a way for developers to easily generate realistic load by defining user behaviour in code (since hardware is cheap, while developer time isn't). However, in really large-scale load testing scenarios (e.g. when simulating hundreds of thousands of simultaneous users), it might make sense to swap out Locust's default HTTP client (python-requests) in favour of a faster HTTP lib like geventhttpclient (https://github.com/gwik/geventhttpclient).
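For context, "defining user behaviour in code" means writing a locustfile. A minimal sketch in the Locust 0.x API of this era might look like the following (the paths, weights, and wait times here are made-up illustrations, not taken from this thread):

```python
# locustfile.py -- a hypothetical sketch using the 2017-era Locust 0.x API.
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    # Tasks are picked at random, weighted by the @task argument:
    # "/" is hit three times as often as "/about".
    @task(3)
    def index(self):
        self.client.get("/")

    @task(1)
    def about(self):
        self.client.get("/about")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000  # each simulated user pauses 1-3 seconds between tasks,
    max_wait = 3000  # in milliseconds
```

A file like this is what the `-f test-homepage.py` flag points at; for distributed generation the 0.x docs describe starting one process with `--master` and the workers with `--slave --master-host=<master-ip>`.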

@phuongnd08
Author

I'm using this: `locust -f test-homepage.py --no-web -c 400 -r 100 -n 2000 --host=http://localhost:3000`

How do I run multiple processes with it, then? A quick check makes it look like I need to use the web interface to do this?

@phuongnd08
Author

Just to correct myself: against the lighttpd server, wrk manages around 9,000 req/s while JMeter manages 3,000-7,000 req/s. Locust is not reaching even 1% of the capacity of these tools.

@cgoldberg
Member

--host=http://localhost:3000

You are running Locust on the same machine as your server under test. Therefore, the load generator and your web server are just competing for resources, resulting in poor performance for both. Any throughput results you get with this topology will be unrealistic and unreliable. You should set up your host (the web server) on a different machine residing on the same local network.

serving 1 byte file the python SimpleHTTPServer can only do 60 req per second

Remember that SimpleHTTPServer is a single-threaded, single-process server. I'm not sure how much throughput it can handle on any given hardware, but I doubt it is very high.
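As an aside, the single-threaded point is visible in the standard library itself: `python -m SimpleHTTPServer` handles one request at a time, and a threaded variant is only a mix-in away. A small sketch (using Python 3 module names, `http.server` and `socketserver`, which correspond to the Python 2 `SimpleHTTPServer` of this thread):

```python
# Sketch: the stdlib file server made multi-threaded via ThreadingMixIn.
# (Python 3 names; `python -m SimpleHTTPServer` is the single-threaded
# Python 2 equivalent discussed above.)
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler
from socketserver import ThreadingMixIn

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True  # don't block interpreter exit on open connections

# Port 0 asks the OS for any free port.
server = ThreadedHTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Each request is now handled in its own thread instead of serially.
status = urllib.request.urlopen("http://127.0.0.1:%d/" % port).status
server.shutdown()
```

This removes the one-request-at-a-time ceiling, though a laptop serving and generating load at once is still the bigger problem here.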

I'm using a Mac Book with 16G RAM so machine performance is not an issue

You shouldn't assume it's not an issue without monitoring resources during your tests... just knowing it has 16 GB of RAM tells you next to nothing... it's still a laptop with a processor built for power efficiency. A single machine running 400 concurrent users on a single mobile processor core would likely run into problems under significant load. You probably need a distributed configuration and several load-generating machines to saturate your server under test. Always measure your resources to find the bottlenecks in your test bed.

Also, does your production server run on a single MacBook? (I doubt it.)
Does your production server run OS X? (I doubt it.)
Most likely your app runs on a cluster of powerful servers that are tuned for high throughput and run a different operating system... not a laptop with OS X. Therefore, any results you obtain with this setup will be unrealistic.

looking like a performance issue with locust to me.

you are incorrect... you have a flawed testing environment.

@phuongnd08
Author

That flawed testing environment works fine with JMeter and wrk. So I'd rather think Locust's performance is pretty poor.

@heyman
Member

heyman commented Jan 23, 2017

@phuongnd08 You are of course free to use whatever tool you want :). I'm not sure you read my whole reply:

With that said, Locust will never reach as high RPS as any of the super simple HTTP load generators like apache bench or wrk. The USP and goal of Locust is to provide a way for developers to easily generate realistic load by defining user behaviour in code (since hardware is cheap, while developer time isn't).

The idea that Locust's main purpose is to be able to generate as many requests/s with as little hardware as possible is a misconception. If that's your only goal you should go with something like apache bench.

@phuongnd08
Author

@heyman Thanks for that. Locust is good and I respect what has been done. But for a starter like me, who will first try everything on a single machine, it doesn't give me as reliable an indicator as JMeter and wrk do. Yes, these tools are simpler, but in terms of giving a clear signal of whether our app is improving its performance, they give me a clearer result. I stand by my view that Locust's performance is poor, but I by no means undervalue what you are doing. I'm going with JMeter for the moment and I'm happy with it.

@heyman
Member

heyman commented Jan 23, 2017

Saying Locust's performance is poor by comparing its performance to wrk isn't really fair, since Locust does much more than just fire off HTTP requests. That would be like comparing nginx serving static HTTP requests to a dynamic web app that does a whole lot more, like rendering templates and querying a database.

@cgoldberg
Member

in term of giving a clear signal of whether our app is improving its performance, it gives me clearer result

@phuongnd08
With the test environment you described, things will perform wildly differently than they would in a realistic test setup. It wouldn't be possible to draw any useful conclusions about your app's performance based on such results (no matter which tool you are using).

@phuongnd08
Author

@cgoldberg I don't think so, at least not in my situation. lighttpd does not take a lot of resources (wrk gets 9,000 req/s against it), so why can't Locust go beyond 80 req/s? Perhaps I could make it run a bit faster, up to 100 req/s, by moving lighttpd to another machine, but that's still nothing compared to wrk. Despite the fact that wrk competes for resources with my app, it still gives me a signal for optimizing my app (the req/s doubles when I make some tweaks). And yes, such a conclusion is useful enough for me.

@allenling

Since there is a huge difference between the req/s reported by Locust and by wrk, what exactly does req/s mean in Locust?

4 participants