
MasterRunner target_user_count no longer set for test_start event listeners #1883

Closed
jrweldon opened this issue Sep 14, 2021 · 8 comments · Fixed by #1891 or #1894
@jrweldon

jrweldon commented Sep 14, 2021

Describe the bug

I was following the custom_messages.py example and noticed that environment.runner.target_user_count is no longer resolving to the number of users being created for the test. This functionality appears to have been broken between tags 2.0.0b1 and 2.0.0b2.
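For context, the pattern in question boils down to something like this (a simplified sketch, not the actual example file):

from locust import events
from locust.runners import WorkerRunner

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    # Only the master/local runner hands out data to the workers.
    if not isinstance(environment.runner, WorkerRunner):
        # On 2.0.0b1 this was the user count entered in the WebUI;
        # from 2.0.0b2 onward it resolves to 0 at this point.
        print(f"Target user count: {environment.runner.target_user_count}")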

Expected behavior

The MasterRunner's target_user_count should resolve to the number of users specified in the WebUI for the test.
The custom_messages.py example for a test with 10 users and 2 workers should log a message from each worker indicating that they received 5 users.

master_1  | [2021-09-14 02:48:02,550] d4b6940eebd0/INFO/locust.runners: Sending spawn jobs of 10 users at 1.00 spawn rate to 2 ready clients
master_1  | Thanks for the 5 users!
master_1  | Thanks for the 5 users!
worker_1  | User4
worker_2  | User9
worker_1  | User3
...

Actual behavior

The MasterRunner's target_user_count is 0, causing numerous exceptions in the custom_messages.py example.

master_1  | [2021-09-14 03:01:33,449] 0134f51ea2b2/INFO/locust.runners: Sending spawn jobs of 10 users at 1.00 spawn rate to 2 ready clients
master_1  | Thanks for the 0 users!
master_1  | Thanks for the 0 users!
worker_2  | Traceback (most recent call last):
worker_2  |   File "src/gevent/greenlet.py", line 906, in gevent._gevent_cgreenlet.Greenlet.run
worker_2  |   File "/usr/local/lib/python3.8/site-packages/locust/runners.py", line 1138, in <lambda>
worker_2  |     self.spawning_greenlet = self.greenlet.spawn(lambda: self.start_worker(job["user_classes_count"]))
worker_2  |   File "/usr/local/lib/python3.8/site-packages/locust/runners.py", line 1075, in start_worker
worker_2  |     self.spawn_users(user_classes_spawn_count)
worker_2  |   File "/usr/local/lib/python3.8/site-packages/locust/runners.py", line 221, in spawn_users
worker_2  |     new_users += spawn(user_class, spawn_count)
worker_2  |   File "/usr/local/lib/python3.8/site-packages/locust/runners.py", line 210, in spawn
worker_2  |     new_user = self.user_classes_by_name[user_class](self.environment)
worker_2  |   File "/mnt/locust/custom_messages.py", line 55, in __init__
worker_2  |     self.username = usernames.pop()
worker_2  | IndexError: pop from empty list
worker_2  | 2021-09-14T03:01:33Z <Greenlet at 0x7fc070a7ae10: <lambda>> failed with IndexError
...

Steps to reproduce

  1. Create docker-compose.yml:
version: '3'

services:
  master:
    image: locustio/locust:2.2.1
    # image: locustio/locust:2.0.0b1 ## Last known working version
    ports:
     - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/custom_messages.py --master -H http://master:8089
  
  worker:
    image: locustio/locust:2.2.1
    # image: locustio/locust:2.0.0b1 ## Last known working version
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/custom_messages.py --worker --master-host master
    deploy:
      mode: replicated
      replicas: 2
  2. Copy the existing examples/custom_messages.py into the same directory as the docker-compose.yml
  3. Run docker compose up
  4. Visit http://localhost:8089/ and start a test with 10 users
  5. (Optional) Modify the docker-compose.yml to use a different image tag listed below and repeat steps 3 and 4.

Environment

  • OS: Debian GNU/Linux 11 (bullseye) from locustio/locust docker image.
  • Python version: 3.8.11 and 3.8.12 from the various locustio/locust docker images.
  • Locust version: 2.0.0b2, 2.0.0b3, 2.1.0, 2.2.0, 2.2.1
  • Locust command line that you ran: See docker-compose.yml above.
  • Locust file contents (anonymized if necessary): examples/custom_messages.py
jrweldon added the bug label Sep 14, 2021
@cyberw
Collaborator

cyberw commented Sep 14, 2021

Thanks for the nice and detailed description. I think this is either @mboutet's area (the only relevant PR between b1 and b2 was made by him) or @nathan-beam's, as he wrote the custom messages feature.

@mboutet
Contributor

mboutet commented Sep 14, 2021

I see where the issue is coming from. The relevant changes are here.

Before 2.0.0b2, the "weighing" of users was done before the dispatch to workers took place. However, since #1809, the weighing is done on-the-fly during dispatch using a smooth weighted round robin generator. As seen in the above linked diff, target_user_classes_count is only set after the ramp-up/down is finished. I agree this is not ideal, because it makes sense to know what the target is before actually reaching it.
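For anyone unfamiliar with the technique, a smooth weighted round-robin generator picks user classes one at a time in proportion to their weights, roughly like this (illustrative sketch only, not the actual dispatcher code):

def smooth_weighted_round_robin(weights):
    # weights: e.g. {"UserA": 2, "UserB": 1}
    current = {name: 0 for name in weights}
    total = sum(weights.values())
    while True:
        # On every pick, credit each class by its weight, select the class
        # with the highest running total, then debit it by the total weight.
        for name, weight in weights.items():
            current[name] += weight
        selected = max(current, key=current.get)
        current[selected] -= total
        yield selected

gen = smooth_weighted_round_robin({"UserA": 2, "UserB": 1})
print([next(gen) for _ in range(6)])  # ['UserA', 'UserB', 'UserA', 'UserA', 'UserB', 'UserA']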

Switching to this approach of weighing on-the-fly was one of the key points in addressing the performance issues while resolving the other issues in #1621.

Right now, the target_user_count is a property computed from target_user_classes_count. So we could add:

self.target_user_classes_count = self._users_dispatcher._distribute_users(user_count)[0]

at line:


The downside is that calling _distribute_users for a large user count (I'd say more than 30k-50k users) can be computationally expensive, but I think for most use cases, this should not be a problem.
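To illustrate why the value is 0 in the listener, the relationship is essentially this (simplified, not the actual runner code):

class RunnerSketch:
    def __init__(self):
        # With the current dispatch logic, this is only populated once the
        # ramp-up/down has finished.
        self.target_user_classes_count = {}

    @property
    def target_user_count(self):
        return sum(self.target_user_classes_count.values())

runner = RunnerSketch()
print(runner.target_user_count)  # 0 -- what test_start listeners currently see
runner.target_user_classes_count = {"WebsiteUser": 10}
print(runner.target_user_count)  # 10 -- only after spawning completes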

Another solution could be to no longer compute target_user_count from target_user_classes_count. But I don't feel that this solution really solves the problem, because the day someone needs target_user_classes_count in a custom message instead of target_user_count, we'll have the same problem.

@cyberw
Collaborator

cyberw commented Sep 14, 2021

Hmm... I think your second proposed solution, while a little more limited in the flexibility we give the user, is better. Doing all that work just to get the target user count (which already exists) just doesn't feel right.

I think it is reasonable to assume that 99% of tests will not require exact knowledge of the count of each user class. If that is necessary, then maybe it is something we could allow the locustfile to calculate only if it is needed.

@jrweldon
Author

If there's another mechanism for retrieving, in a test_start event listener, the target user count that was entered in the WebUI or passed in via the -u/--users CLI arg, that would similarly solve my issue.
I'm not too savvy with Python, though I did try inspect.getmembers(...) on various objects that were available to the function, but could not find anything that contained the value.
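For reference, this is roughly how I was poking around (illustrative only):

import inspect

from locust import events

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    # Dump the runner's non-private, non-callable attributes to look for the count.
    for name, value in inspect.getmembers(environment.runner):
        if not name.startswith("_") and not callable(value):
            print(name, value)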

@cyberw
Collaborator

cyberw commented Sep 15, 2021

As a workaround, if you specify the number of users on the command line you can get it from environment.parsed_options.users. I'm not sure, but it could maybe be argued that the web UI should overwrite that value when the test is started from there.

@OmonoBlue

Hello, I'm having the exact same issue, even after updating to 2.2.3.
What would be an alternative to use instead of target_user_count? I'm using environment.parsed_options.num_users for the time being, but it doesn't update if you change the user count in the web UI.
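For reference, this is what I'm doing for now (a minimal sketch; note it only reflects the -u/--users CLI value, not changes made in the web UI):

from locust import events

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    if environment.parsed_options is not None:
        # Reflects -u/--users from the command line only; it is not updated
        # when the user count is changed in the web UI.
        print(f"Users requested via CLI: {environment.parsed_options.num_users}")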

@jrweldon
Author

jrweldon commented Sep 27, 2021

Thanks for the fix!

While the PR did provide a solution that works for my use-case, examples/custom_messages.py still fails to run. Previously, the MasterRunner's target_user_count would resolve to the total number of users desired for the test as a whole (i.e., the value passed on the CLI or via the WebUI). With the fix, that value is still 0 on the MasterRunner.

I'm unsure whether that's behavior you want to restore. If there's no plan to add it back, then examples/custom_messages.py should be updated to work with the current logic.

Thanks again.

UPDATE:
For the MasterRunner, target_user_count is not getting set prior to invoking the test_start listeners, causing it to report the count from the prior run (e.g., when the operator stops the test and starts a new one while the Locust cluster is still running).
Note: I added print(f"Target User Count: {environment.runner.target_user_count}") at the top of the on_test_start method just to see what the values were.

Output from following my earlier instructions, then stopping and restarting the test with 10 users, then stopping and restarting it with 7 users. Note that for the final run with 7, the logs report that the MasterRunner's target_user_count was 10 (from the prior run), though the WorkerRunners reported 3 and 4, which equals the 7 requested.

❯ docker compose up
[+] Running 3/3
 ⠿ Container locust_worker_2  Started            0.7s
 ⠿ Container locust_master_1  Started            0.7s
 ⠿ Container locust_worker_1  Started            0.7s
Attaching to master_1, worker_1, worker_2
master_1  | [2021-09-27 13:51:24,736] 821f61f62fc1/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
worker_2  | [2021-09-27 13:51:24,751] 896f2b598413/INFO/locust.main: Starting Locust 2.2.4.dev11
master_1  | [2021-09-27 13:51:24,755] 821f61f62fc1/INFO/locust.main: Starting Locust 2.2.4.dev11
master_1  | [2021-09-27 13:51:24,756] 821f61f62fc1/INFO/locust.runners: Client '896f2b598413_7fba274473c2486ca7ea8cf0c6ba8301' reported as ready. Currently 1 clients ready to swarm.
worker_1  | [2021-09-27 13:51:24,764] 2dc9f1ec7b93/INFO/locust.main: Starting Locust 2.2.4.dev11
master_1  | [2021-09-27 13:51:24,767] 821f61f62fc1/INFO/locust.runners: Client '2dc9f1ec7b93_d55c0917863a4b569b9d7d6b80287cfb' reported as ready. Currently 2 clients ready to swarm.
master_1  | [2021-09-27 13:51:36,970] 821f61f62fc1/INFO/locust.runners: Sending spawn jobs of 10 users at 10.00 spawn rate to 2 ready clients
master_1  | Target User Count: 0
master_1  | Thanks for the 0 users!
master_1  | Thanks for the 0 users!
worker_2  | Target User Count: 5
worker_2  | 2021-09-27T13:51:37Z <Greenlet at 0x7f06820fae10: <lambda>> failed with IndexError
<REDACTED/unnecessary stacktrace>
worker_1  | Target User Count: 5
worker_1  | 2021-09-27T13:51:37Z <Greenlet at 0x7ff27e42ce10: <lambda>> failed with IndexError
<REDACTED/unnecessary stacktrace>
master_1  | [2021-09-27 13:51:37,074] 821f61f62fc1/INFO/locust.runners: All users spawned: {"WebsiteUser": 0} (0 total users)
master_1  | [2021-09-27 13:51:45,061] 821f61f62fc1/INFO/locust.runners: Removing 896f2b598413_7fba274473c2486ca7ea8cf0c6ba8301 client from running clients
master_1  | [2021-09-27 13:51:45,061] 821f61f62fc1/INFO/locust.runners: Removing 2dc9f1ec7b93_d55c0917863a4b569b9d7d6b80287cfb client from running clients
master_1  | [2021-09-27 13:51:45,062] 821f61f62fc1/INFO/locust.runners: Client '896f2b598413_7fba274473c2486ca7ea8cf0c6ba8301' reported as ready. Currently 1 clients ready to swarm.
master_1  | [2021-09-27 13:51:45,062] 821f61f62fc1/INFO/locust.runners: Client '2dc9f1ec7b93_d55c0917863a4b569b9d7d6b80287cfb' reported as ready. Currently 2 clients ready to swarm.
master_1  | [2021-09-27 13:55:01,080] 821f61f62fc1/INFO/locust.runners: Sending spawn jobs of 10 users at 10.00 spawn rate to 2 ready clients
master_1  | Target User Count: 10
master_1  | Thanks for the 5 users!
master_1  | Thanks for the 5 users!
worker_2  | Target User Count: 5
<REDACTED/unnecessary logs from test running>
worker_1  | Target User Count: 5
<REDACTED/unnecessary logs from test running>
master_1  | [2021-09-27 13:55:01,158] 821f61f62fc1/INFO/locust.runners: All users spawned: {"WebsiteUser": 10} (10 total users)
master_1  | [2021-09-27 13:55:02,536] 821f61f62fc1/INFO/locust.runners: Removing 896f2b598413_7fba274473c2486ca7ea8cf0c6ba8301 client from running clients
master_1  | [2021-09-27 13:55:02,536] 821f61f62fc1/INFO/locust.runners: Removing 2dc9f1ec7b93_d55c0917863a4b569b9d7d6b80287cfb client from running clients
master_1  | [2021-09-27 13:55:02,536] 821f61f62fc1/INFO/locust.runners: Client '896f2b598413_7fba274473c2486ca7ea8cf0c6ba8301' reported as ready. Currently 1 clients ready to swarm.
master_1  | [2021-09-27 13:55:02,537] 821f61f62fc1/INFO/locust.runners: Client '2dc9f1ec7b93_d55c0917863a4b569b9d7d6b80287cfb' reported as ready. Currently 2 clients ready to swarm.
master_1  | Target User Count: 10
master_1  | [2021-09-27 14:00:51,122] 821f61f62fc1/INFO/locust.runners: Sending spawn jobs of 7 users at 7.00 spawn rate to 2 ready clients
master_1  | Thanks for the 5 users!
master_1  | Thanks for the 5 users!
worker_2  | Target User Count: 3
<REDACTED/unnecessary logs from test running>
worker_1  | Target User Count: 4
<REDACTED/unnecessary logs from test running>
master_1  | [2021-09-27 14:00:51,200] 821f61f62fc1/INFO/locust.runners: All users spawned: {"WebsiteUser": 7} (7 total users)

@jrweldon
Author

Thanks again! You all are awesome. Just confirmed that everything is functioning as expected using the latest on master.
