
Support of memcached #29

Closed

Conversation

vincentbernat
Contributor

Hi!

Here is a way to share sessions between different hosts using memcached. There are two major drawbacks:

  1. There seems to be a memory leak (with libmemcached 0.44) but I am not able to pinpoint it. I don't see where it could happen.
  2. Storing a new session is done asynchronously. However, getting a session from memcached is done synchronously, so there is a major performance drawback. Unfortunately, there is no way to register an asynchronous callback with OpenSSL. We could either use some threads (eeek) or insert remote sessions into the local cache; I don't know if memcached allows something like this. We may switch to something that would broadcast insertions to all stud instances.

So, this is just a proof of concept.
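
For readers who want a rough idea of what such a hook looks like, here is a minimal sketch of memcached-backed session callbacks. It assumes libmemcached and current OpenSSL accessor functions; the server addresses, TTL, and function names are placeholders, and it is not the code in this patch. A real version would also hex-encode the session id before using it as a memcached key and handle errors.

```c
/* Illustrative sketch only, not the patch: memcached-backed session callbacks. */
#include <libmemcached/memcached.h>
#include <openssl/ssl.h>
#include <stdint.h>
#include <stdlib.h>

static memcached_st *mc;

/* New session established: serialize it and store it in memcached. */
static int sess_new_cb(SSL *ssl, SSL_SESSION *sess) {
    unsigned char buf[8192], *p = buf;
    unsigned int id_len;
    const unsigned char *id = SSL_SESSION_get_id(sess, &id_len);
    int len = i2d_SSL_SESSION(sess, NULL);      /* query encoded length first */

    if (len > 0 && len <= (int)sizeof(buf)) {
        i2d_SSL_SESSION(sess, &p);
        memcached_set(mc, (const char *)id, id_len,
                      (const char *)buf, len, 300 /* TTL, seconds */, 0);
    }
    return 0;                                   /* OpenSSL keeps the session */
}

/* Resumption: fetch from memcached -- this lookup is the synchronous part. */
static SSL_SESSION *sess_get_cb(SSL *ssl, const unsigned char *id, int id_len,
                                int *copy) {
    size_t vlen;
    uint32_t flags;
    memcached_return_t rc;
    char *val = memcached_get(mc, (const char *)id, id_len, &vlen, &flags, &rc);
    SSL_SESSION *sess = NULL;

    if (rc == MEMCACHED_SUCCESS && val != NULL) {
        const unsigned char *p = (const unsigned char *)val;
        sess = d2i_SSL_SESSION(NULL, &p, vlen);
        free(val);
    }
    *copy = 0;
    return sess;
}

static void setup_session_cache(SSL_CTX *ctx) {
    memcached_return_t rc;
    memcached_server_st *servers = NULL;

    mc = memcached_create(NULL);
    /* Several servers can be listed; the client hashes keys across them. */
    servers = memcached_server_list_append(servers, "10.0.0.1", 11211, &rc);
    servers = memcached_server_list_append(servers, "10.0.0.2", 11211, &rc);
    memcached_server_push(mc, servers);

    SSL_CTX_set_session_cache_mode(ctx,
        SSL_SESS_CACHE_SERVER | SSL_SESS_CACHE_NO_INTERNAL);
    SSL_CTX_sess_set_new_cb(ctx, sess_new_cb);
    SSL_CTX_sess_set_get_cb(ctx, sess_get_cb);
}
```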

@gyepisam
Contributor

gyepisam commented Oct 3, 2011

This is an interesting concept. I had thought about extending the session cache to a networked database so that SSL sessions could fail over with the load balancer as necessary. I usually run load balancers in active/backup mode, so I was more interested in using a database with replication (something like Tokyo Tyrant). In this context, a standard memcached would be a single point of failure. However, Tokyo Tyrant supports the memcache protocol (with some overhead), so all is not lost. I'll have to test out your memcache extensions in this context.

@vincentbernat
Contributor Author

You can specify several memcached servers. I suppose they keep themselves in sync.

@jamwt
Member

jamwt commented Oct 12, 2011

So, I've thought about this quite a bit, and about how to get around the "session retrieval cannot block" problem...

My idea is to use something like redis pub/sub, via hiredis.

  1. Two background threads run on every child.
  2. One listens on a thread-safe queue for serialized session objects. When one of those session objects is created or used by the main thread, it is put on this queue; the thread pops it and PUBLISHES it to a gossip channel.
  3. The second thread SUBSCRIBEs to the gossip channel. When any serialized session is broadcast, it puts it (or renews it) in a mutex-locked LRU structure, using something like http://jehiah.cz/a/uthash .
  4. On the main thread, SSL_CTX_sess_set_new_cb just pushes the session onto the thread-safe queue--nonblocking.
  5. On the main thread, SSL_CTX_sess_set_get_cb checks the LRU item for the session. If it exists, it uses it (and places it on the thread-safe queue for renewal). If it doesn't exist, oh well. Also nonblocking.
  6. The failure mode for redis is just that the background threads cycle, periodically retrying connection to the pub/sub server, and the caches stay stagnant until redis recovers. Oh well...

... this would create quite a busy gossip channel between N children on M machines--but redis (for example) can push 100k messages a second or so, which is quite a bit of headroom. And it avoids blocking the main thread on a network request.
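
As a rough sketch of how those pieces could fit together (assuming hiredis, pthreads, and current OpenSSL callback signatures; the channel name, and the queue and LRU helpers such as queue_push and lru_get, are illustrative stubs rather than an existing API, and the reconnect logic from step 6 is omitted):

```c
/* Hypothetical sketch of the pub/sub gossip idea -- not working stud code. */
#include <hiredis/hiredis.h>
#include <openssl/ssl.h>
#include <pthread.h>
#include <stdlib.h>

/* Illustrative stubs for the thread-safe queue and the mutex-locked LRU. */
void   queue_push(void *blob, size_t len);              /* nonblocking */
void  *queue_pop_blocking(size_t *len);                 /* blocks in bg thread */
void   lru_put(const void *blob, size_t len);           /* insert/renew, locked */
SSL_SESSION *lru_get(const unsigned char *id, int len); /* NULL if missing */

/* Background thread 1: drain the queue and PUBLISH serialized sessions. */
static void *publisher_thread(void *arg) {
    redisContext *c = redisConnect("127.0.0.1", 6379);  /* reconnects omitted */
    for (;;) {
        size_t len;
        void *blob = queue_pop_blocking(&len);
        redisReply *r = redisCommand(c, "PUBLISH sessions %b", blob, len);
        if (r) freeReplyObject(r);
        free(blob);
    }
    return NULL;
}

/* Background thread 2: SUBSCRIBE to the gossip channel, feed the local LRU. */
static void *subscriber_thread(void *arg) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    redisReply *r = redisCommand(c, "SUBSCRIBE sessions");
    if (r) freeReplyObject(r);
    while (redisGetReply(c, (void **)&r) == REDIS_OK) {
        /* A pub/sub message is an array: ["message", channel, payload]. */
        if (r->type == REDIS_REPLY_ARRAY && r->elements == 3)
            lru_put(r->element[2]->str, r->element[2]->len);
        freeReplyObject(r);
    }
    return NULL;
}

/* Main thread: both callbacks stay nonblocking. */
static int new_cb(SSL *ssl, SSL_SESSION *sess) {
    int len = i2d_SSL_SESSION(sess, NULL);    /* length query */
    if (len > 0) {
        unsigned char *blob = malloc(len), *p = blob;
        i2d_SSL_SESSION(sess, &p);
        queue_push(blob, len);                /* queue frees it after PUBLISH */
    }
    return 0;
}

static SSL_SESSION *get_cb(SSL *ssl, const unsigned char *id, int len, int *copy) {
    *copy = 0;
    return lru_get(id, len);                  /* local lookup only, no network */
}

/* Started once per child, e.g.:
 *   pthread_t t1, t2;
 *   pthread_create(&t1, NULL, publisher_thread, NULL);
 *   pthread_create(&t2, NULL, subscriber_thread, NULL);
 */
```

The point of this layout is that both OpenSSL callbacks only touch in-process structures, so the main I/O thread never waits on the network; only the background threads talk to redis.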

I have an early version of this that isn't quite working yet, but it's the best approach I've come up with so far.

That being said, I'm reluctant to pull in anything that would block stud's I/O thread--that pretty much entirely defeats the execution model.

@vincentbernat
Contributor Author

If you come up with a working branch, I would be happy to test for performance regression.

@jamwt
Member

jamwt commented Nov 2, 2011

See #50 -- Emeric from Exceliance just tackled this problem in a very direct way.

@vincentbernat
Contributor Author

Yes, this seems far better. Feel free to close this pull request.
