dealing with disconnects doesn't recycle threads #151

Closed
ghost opened this issue Jun 5, 2017 · 3 comments

ghost commented Jun 5, 2017

In http://jasonrbriggs.github.io/stomp.py/api.html#dealing-with-disconnects a new thread is created on each reconnect, which may cause the box to max out its allowed threads or memory.
It would be great if there were a way to do this while recycling the thread.
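
For context, the pattern being referenced looks roughly like the following sketch (host, credentials and destination are placeholders; conn.start() is the stomp.py 4.x API and was removed in 5.x):

      import time
      import stomp

      def connect_and_subscribe(conn):
          conn.start()   # stomp.py 4.x API; not present in 5.x
          conn.connect('user', 'password', wait=True)
          conn.subscribe(destination='/queue/test', id=1, ack='auto')

      class ReconnectListener(stomp.ConnectionListener):
          def __init__(self, conn):
              self.conn = conn

          def on_disconnected(self):
              # Reconnecting here makes the transport spin up a fresh
              # receiver thread for the new connection.
              connect_and_subscribe(self.conn)

      conn = stomp.Connection([('localhost', 61613)])
      conn.set_listener('', ReconnectListener(conn))
      connect_and_subscribe(conn)

      while True:
          time.sleep(1)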

@jasonrbriggs (Owner) commented:

Sorry for the rather ridiculously delayed reply. I don't think thread recycling is required. A disconnect should result in the receiver_loop (effectively the run loop) completing, which terminates a normal thread. The restart then creates a new thread and associates it with a new call to the receiver loop. So under normal conditions, I don't think you'd ever max out the threads (although I guess there could be a gap in the code where receiver loop(s) never complete, which would mean you'd exhaust threads at some point -- do you have a specific example?)

One gap might be thread pooling, if the client implements override_threading on the transport -- specifically, if there needs to be an explicit call to return a thread to the pool on completion. That's something that could be implemented, I guess, but I'd need to see an example to figure out whether it's really a common pattern...
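
As a purely hypothetical illustration of what client-side pooling via override_threading could look like (whether a Future is an acceptable stand-in for a Thread object depends on the stomp.py version, so treat this as a sketch; note that ThreadPoolExecutor hands workers back to the pool by itself once the submitted callable returns, with no explicit "recycle" step):

      from concurrent.futures import ThreadPoolExecutor

      import stomp

      # Small pool for receiver loops; a worker is reused automatically
      # once the callable it is running (the receiver loop) returns.
      receiver_pool = ThreadPoolExecutor(max_workers=2)

      def pooled_create_thread(callback):
          # stomp.py calls this with the receiver-loop callable and expects
          # back something that is already running; a Future fits loosely.
          return receiver_pool.submit(callback)

      conn = stomp.Connection([('localhost', 61613)])
      conn.transport.override_threading(pooled_create_thread)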

@jasonrbriggs (Owner) commented:

Did some more reading. I don't see this as something that should be handled by stomp.py. Take ThreadPoolExecutor as an example -- there's no way to recycle threads. You submit a job (a function) to the executor and it runs to completion. There are no methods for returning a thread to the pool. If there's a threadpool library out there that does require manual recycling, then that should be handled by the client.
In terms of memory leakage, the only issue I see is a disconnect that's handled outside the receiver loop. There's a chance that disconnecting the socket (which sets running=False) could result in the original receiver_loop not exiting before the next call to start sets running=True, meaning you end up with two threads and two receiver loops. However, you could get around this by waiting, in the client code, for the transport's running variable (e.g. conn.transport.running) to be False, then triggering start+connect+subscribe again. Again, I think that's better handled in the client.
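
A minimal client-side sketch of that wait (safe_reconnect and connect_and_subscribe are hypothetical helper names, not part of stomp.py):

      import time

      def safe_reconnect(conn, connect_and_subscribe, timeout=10.0):
          # Wait for the old receiver loop to finish (transport.running goes
          # False) so we never end up with two receiver loops/threads at once.
          deadline = time.time() + timeout
          while conn.transport.running and time.time() < deadline:
              time.sleep(0.1)
          connect_and_subscribe(conn)   # client helper: start + connect + subscribe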

kannanF9T commented Nov 11, 2020

@jasonrbriggs I am randomly facing a similar kind of memory leak issue ("double free or corruption"), which kills the Python process.
Do we need to check that the transport.running variable is False before establishing a new connection, even if we are already checking conn.is_connected()?
In my case I reconnect to a different ActiveMQ broker, which is configured as a failover.
We have the connection configured over SSL (stomp+ssl).
Need your suggestions on this.

Pseudocode:

In the main script I have:

      while True:
          # create_connection() and subscribe() are helpers in my own class
          # that set up the stomp connection and subscription.
          while self.conn.is_connected():
              time.sleep(5)
          self.create_connection()
          self.subscribe()
