[WebServerPlugin] Add support for asynchronous response streaming #289
Comments
Hi @ahmedtalhakhan, thanks for reporting such a scenario. I think what's happening here is that the response is being queued but not getting flushed, since your handler blocks in a receive loop before returning. What you will have to do is make receiving from the upstream socket non-blocking, so that control returns to the core event loop. I have marked this ticket as an enhancement as it's not really a bug :) But we should be able to create ways to handle such scenarios (more efficiently).
@abhinavsingh that makes sense. But how do I make the receiving from upstream non-blocking in the current scenario? If I return from the request handler, will the data I have queued still be delivered to the client?
When you return from the request handler, the core event loop takes over and flushes whatever has been queued for the client.

So it should work, but I haven't tried it before (for non-websocket connections). Can you try asynchronous delivery of responses? For a quick proof-of-concept, ignore non-blocking sockets and simply try to dispatch a response chunk every second asynchronously, and lemme know how it goes. Thank you!!!
Hey @abhinavsingh, thanks for the clarification. The issue of the core server not being able to flush can be resolved by calling flush from within the receive loop; the above code can be changed accordingly.
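A minimal sketch of that idea (the original code snippets were not preserved in this thread): the `Client` class below is a hypothetical stand-in for proxy.py's client connection object, which exposes similar `queue()`/`flush()` buffering semantics, and the demo uses a local socketpair in place of real connections.

```python
import socket

class Client:
    """Hypothetical stand-in for proxy.py's client connection object,
    which exposes similar queue()/flush() buffering semantics."""

    def __init__(self, conn: socket.socket) -> None:
        self.conn = conn
        self.buffer = b''

    def queue(self, data: bytes) -> None:
        self.buffer += data

    def flush(self) -> None:
        if self.buffer:
            self.conn.sendall(self.buffer)
            self.buffer = b''

def relay(upstream: socket.socket, client: Client) -> None:
    """Queue each upstream chunk and flush it immediately, instead of
    letting the response sit in the buffer until the handler returns."""
    while True:
        data = upstream.recv(4096)
        if not data:        # upstream closed the connection
            break
        client.queue(data)
        client.flush()      # push this chunk out now

# Demo with socketpairs standing in for the real connections.
up_a, up_b = socket.socketpair()
cl_a, cl_b = socket.socketpair()
up_b.sendall(b'body { color: red }')
up_b.close()
relay(up_a, Client(cl_a))
print(cl_b.recv(4096))      # b'body { color: red }'
```

Note this only fixes the flushing side; as discussed below, the loop itself still blocks if the upstream never closes the socket.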
I think the problem then becomes a little different. In the original code, the recv call blocks for any upstream server when HTTP/1.1 is used, because the upstream server never closes the socket. The only way out at that point is to either make the socket non-blocking or set a timeout. On your suggestion to "try to dispatch a response chunk every second asynchronously", did you mean launching a new thread/process from within the request handler?
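One way to avoid hanging in a blocking recv when an HTTP/1.1 upstream keeps the connection open is to make the upstream socket non-blocking and poll it for readability. This is only a sketch of the general technique using the standard library's `selectors` module, not proxy.py's own mechanism, and the timeout value is arbitrary:

```python
import selectors
import socket

def read_all_nonblocking(upstream: socket.socket, timeout: float = 1.0) -> bytes:
    """Drain whatever is currently readable from a non-blocking socket.

    Returns once nothing arrives within `timeout`, instead of hanging
    forever in a blocking recv() against an HTTP/1.1 upstream that
    stops sending without closing the connection."""
    upstream.setblocking(False)
    sel = selectors.DefaultSelector()
    sel.register(upstream, selectors.EVENT_READ)
    chunks = []
    while True:
        events = sel.select(timeout)
        if not events:          # nothing readable within timeout: stop waiting
            break
        data = upstream.recv(4096)
        if not data:            # peer closed the connection
            break
        chunks.append(data)
    sel.close()
    return b''.join(chunks)

# Demo with a local socketpair standing in for the upstream connection.
a, b = socket.socketpair()
b.sendall(b'hello ' * 3)
print(read_all_nonblocking(a, timeout=0.2))  # b'hello hello hello '
```

The trade-off is the timeout itself: too short risks truncating a slow response, too long reintroduces the delay this issue is about.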
I frankly won't recommend doing it if you care about scalability and such. The problem is that you are now calling flush explicitly, resulting in a blocking I/O call, because the client might not be ready.
Of course this is bound to happen: the business logic is now just a tunnel and it doesn't keep any state, i.e. by transparently tunneling you have offloaded connection teardown to either the upstream server or the client, because the tunnel doesn't know when the response has finished.
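To illustrate the state a tunnel would need in order to know when a response has finished: for non-chunked responses, tracking the Content-Length header is enough. A rough sketch (a hypothetical helper, not part of proxy.py; chunked transfer encoding is deliberately ignored):

```python
def response_complete(raw: bytes) -> bool:
    """Return True once a buffered HTTP response with a Content-Length
    header has been fully received. Chunked encoding is not handled."""
    head, sep, body = raw.partition(b'\r\n\r\n')
    if not sep:                 # headers not fully received yet
        return False
    for line in head.split(b'\r\n')[1:]:   # skip the status line
        name, _, value = line.partition(b':')
        if name.strip().lower() == b'content-length':
            return len(body) >= int(value.strip())
    return False                # without Content-Length we cannot tell here

resp = b'HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello'
print(response_complete(resp))        # True
print(response_complete(resp[:-2]))   # False
```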
Yes, I think this is the way to go here. Start a separate thread (not a process) and then repeatedly queue and flush response chunks to the client from that thread.
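The thread-based pattern suggested above can be sketched with plain sockets. This is a stand-in for proxy.py's internals, not its actual API: in a real plugin you would hand chunks to the client connection object rather than write to a raw socket, and the socketpairs here merely simulate the upstream and client connections.

```python
import socket
import threading

def stream_upstream_to_client(upstream: socket.socket,
                              client: socket.socket) -> threading.Thread:
    """Copy upstream data to the client from a separate thread.

    The caller (in proxy.py's case, the request handler) returns
    immediately, while the thread keeps forwarding chunks as they
    arrive instead of blocking the handler."""
    def pump() -> None:
        while True:
            data = upstream.recv(4096)   # blocking is fine: only this thread waits
            if not data:                 # upstream closed: tear down client side
                break
            client.sendall(data)
        client.close()

    t = threading.Thread(target=pump, daemon=True)
    t.start()
    return t

# Demo with socketpairs standing in for the real connections.
up_a, up_b = socket.socketpair()
cl_a, cl_b = socket.socketpair()
t = stream_upstream_to_client(up_a, cl_a)
up_b.sendall(b'chunk1')
up_b.close()                 # signals end of the response
t.join()
print(cl_b.recv(4096))       # b'chunk1'
```

The thread owning the blocking recv keeps the core event loop free, which matches the reporter's observation below that a thread greatly improved performance.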
Thanks @abhinavsingh. I have the same problem: I'm proxying images, so the standard plugin didn't work. When looping to send server data to the client, there was a 10-second delay after the server finished sending the data. Using a thread greatly improved the performance:
In the web plugin, I only need this now:
But a 1-second delay remains, any idea why?

Edit: the proxy seems to wait 1 second before sending the data to the client. Do I have to do something to tell the core that the request is finished and it can go on?
@sebastiendarocha @ahmedtalhakhan Necessary support to allow async I/O operations within plugins has been added. I'll update the documentation to reflect this. Due to synchronous operations within the plugin, responses were previously held up until the handler returned. See https://github.com/abhinavsingh/proxy.py/blob/develop/proxy/http/server/plugin.py#L41-L70 for the new plugin callback methods.
@sebastiendarocha @ahmedtalhakhan PTAL at #675. Thank you for your patience.
Describe the bug
WebServerPlugin does not render all data from upstream in a timely manner
To Reproduce
Steps to reproduce the behavior:
Run proxy.py as '--enable-webserver --plugin proxy.plugin.WebServerPlugin'

Expected behavior
All response data from upstream should come through in a timely fashion. Instead, the socket is held for some time by the proxy/plugin even after reading the upstream data, and there is a delay in sending the data back to the client.
Version information
Additional context
The following changes have been made to the WebServerPlugin. Note that the route has a '*' which is supposed to match all paths. The upstream server is hosted at local port 5678 and serves a plain CSS file.
The modifications are inspired by the reverse_proxy plugin, which reaches out to an upstream server; note that these modifications are required for this plugin to correctly process all of the data. The original code in the reverse_proxy plugin just does a single conn.recv call, which is not enough to handle larger responses.

Now run the following curl command. Note that the CSS file is just a random CSS file of size 268K.
The output of the curl command is the following. Note that the transfer took upwards of 5 seconds, whereas the code has no such delay or sleep. Running the code with debug prints makes it evident that data is read from the upstream socket very quickly, but it is not returned to the client for some time.