Redis database grew too big, causing general Redis troubles #271
After this "reset" yesterday, today's numbers look different.
When I look at the output from yesterday again, something feels very off here.
Today's output:
I'm going to watch this for a few days. One thing I remember we did before the problem appeared: we enabled lots of "Monitor Tags", basically one for every job type we have. We didn't do this again after we purged the database. Could this be connected?
I arrived here after having all our "Monitor Tags" disappear, and I think it might be related to this. We're processing hundreds of jobs per minute, and the Monitoring tab would show thousands (some around 100k) of entries in the Jobs column; now there's no tag being monitored. EDIT: Maybe Horizon could add a counter type of monitor, so it would only increment a counter instead of keeping a record of all the jobs. At least that's what I needed in this case.
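A counter-style monitor like the one suggested here would, in principle, store a single integer per tag instead of one record per job. A rough sketch of the idea in plain Redis commands (the key names below are hypothetical illustrations, not Horizon's actual schema):

```shell
# Hypothetical counter-per-tag scheme: one INCR per completed job,
# instead of appending each job ID to a set/list that grows unboundedly.
redis-cli -n 2 INCR "monitoring:counts:App\\Jobs\\ProcessPodcast"
# Reading it back is O(1), and the key's memory footprint stays constant:
redis-cli -n 2 GET "monitoring:counts:App\\Jobs\\ProcessPodcast"
```

The trade-off is that a counter loses the per-job detail the Monitoring tab currently shows, which is exactly why it stays small.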
We've never enabled "tagging" for each job and the problem never appeared. I didn't bother to investigate further as we really didn't need the detailed metrics (it was just "nice to have").
We've had the same problem occurring three times already. The Horizon Redis database keeps growing. Does anybody have a solution for this?
@ndberg after we disabled tagging, we never had this problem again. But OTOH: I don't recall creating that many jobs at once again either (>1 million)
So I should test disabling tagging. I have used tags for all jobs, and I have a similar environment to yours, with fewer queues and workers: 3 queues
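Before disabling tagging outright, it may be worth confirming that tag keys are actually what is growing. One way (assuming the Horizon database is number 2, as in the report below; adjust `-n` to your setup) is redis-cli's sampling mode, which iterates with SCAN rather than blocking the server like `keys *`:

```shell
# Sample the keyspace and report the biggest key per type.
# Uses SCAN internally, so it iterates incrementally instead of
# blocking the server the way KEYS * would.
redis-cli -n 2 --bigkeys

# Overall key count and memory use for a quick before/after comparison:
redis-cli -n 2 DBSIZE
redis-cli INFO memory | grep used_memory_human
```

If the biggest keys turn out to be tag sets, that would support the tagging theory in this thread.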
This could be solved by #333. I'll keep this open for now so we don't lose track of it. |
Today we had the following issue:
Errors we received were:
Connection closed
Redis::pconnect(): connect() failed: Connection timed out
RedisException: read error on connection
(Note: these errors were recorded from non-Laravel, PHP-based applications; i.e., as explained below, the Horizon database size seemed to affect Redis as a whole.)
What we did: we ran `flushdb` in the Horizon database (warning: be sure you've selected the right database). Running that command took 40 seconds.
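For reference, a sketch of that purge (the database number matches our setup described below; verify yours before running anything):

```shell
# Double-check which database you are about to flush.
redis-cli -n 2 DBSIZE     # number of keys that will be deleted
redis-cli -n 2 FLUSHDB    # blocking; took ~40 seconds in our case

# On Redis >= 4.0, the deletion can instead be done in a background
# thread so clients are not blocked while keys are reclaimed:
redis-cli -n 2 FLUSHDB ASYNC
```
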
In our case the Horizon Redis database was number 2; here's the output from the Redis internal `info` command:

The `avg_ttl` looks suspiciously high.

Usually when inspecting such problems I use the `keys *` command, but I didn't dare to run it, as it would block Redis completely while it runs and we couldn't do that in production. As such, at this time we don't have any information about which keys were in there.
We can definitely exclude other applications having written to the same database; it's exclusively used by Horizon.
Our configuration: `horizon:snapshot` running every 5 minutes.

Does anyone have a clue what could cause this?
We have been running this in production for approximately two months now.
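For context on the snapshot schedule mentioned above: `horizon:snapshot` is normally registered via Laravel's scheduler, which in turn is driven by a single cron entry. Expressed as a raw crontab line it would look roughly like this (the application path is a placeholder, not from this report):

```shell
# Example crontab entry; /var/www/app is a placeholder path.
*/5 * * * * php /var/www/app/artisan horizon:snapshot >> /dev/null 2>&1
```
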