
A way to limit the number of packets sent in a single polling request #1531

Closed
maxfliri opened this issue Feb 28, 2022 · 5 comments
Labels: enhancement (New feature or request)
Milestone: 4.5.0

@maxfliri

Is your feature request related to a problem? Please describe.

With transport polling, the number of packets in the write buffer can grow very large, to the point that the next request becomes larger than the maxHttpBufferSize setting on the server. When this happens, the request fails and the client gets disconnected.
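For reference, a minimal sketch of the server-side setting involved; the Server constructor and the maxHttpBufferSize option are the actual socket.io API, while the port is arbitrary and the value shown is the documented 1 MB default:

```ts
import { createServer } from "http";
import { Server } from "socket.io";

const httpServer = createServer();

// Polling requests whose body exceeds maxHttpBufferSize are rejected by the
// server, which disconnects the client.
const io = new Server(httpServer, {
  maxHttpBufferSize: 1e6, // 1 MB, the default value
});

httpServer.listen(3000);
```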

Describe the solution you'd like

It would be useful to have a way to limit the number of packets sent in a single request to the server. This could be a new option that sets the maximum number of packets to send per request, e.g.:

const socket = io(serverUrl, { maxPacketsPerRequest: 200 });

At the moment of sending the request, the client takes packets from the buffer and sends up to maxPacketsPerRequest of them; if the buffer contains more packets than this value, the remaining ones are left in the buffer and sent in the next cycle. A rough sketch of this count-based draining follows below.
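Here is that sketch; maxPacketsPerRequest is the hypothetical option proposed above, and takeBatch is a placeholder name, not an actual engine.io-client internal:

```ts
// Hypothetical illustration of the proposed count-based limit; not the
// actual engine.io-client implementation.
function takeBatch<Packet>(
  writeBuffer: Packet[],
  maxPacketsPerRequest: number
): Packet[] {
  // Remove and return at most maxPacketsPerRequest packets; whatever is
  // left in writeBuffer will be sent in the next polling cycle.
  return writeBuffer.splice(0, maxPacketsPerRequest);
}

// Example: with maxPacketsPerRequest = 200, a buffer of 500 packets is
// flushed over three consecutive polling requests (200 + 200 + 100).
```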

Alternatively, the limit could be based on the size of the request: the client takes packets from the buffer up to the point where the resulting request reaches a maximum payload size. The limit could then be set to the same value as maxHttpBufferSize on the server, ensuring that the client never sends too large a request. This is probably harder to build though, because you need to predict the serialised size of every packet.
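A sketch of this size-based variant, operating on already-encoded packets; the sizeInBytes helper and the maxPayload parameter are illustrative, not the eventual implementation:

```ts
// Hypothetical illustration of a size-based limit; not the actual
// engine.io-client implementation.

// Byte length of an encoded packet (UTF-8). A real implementation would
// also have to account for the separator characters between packets in
// the polling payload.
function sizeInBytes(encodedPacket: string): number {
  return new TextEncoder().encode(encodedPacket).length;
}

// Remove and return the packets that fit into a single request of at
// most maxPayload bytes; whatever is left stays in the buffer.
function takeBatch(writeBuffer: string[], maxPayload: number): string[] {
  let payloadSize = 0;
  let count = 0;
  for (const packet of writeBuffer) {
    payloadSize += sizeInBytes(packet);
    // Always take at least one packet, otherwise a single oversized
    // packet would block the buffer forever.
    if (count > 0 && payloadSize > maxPayload) {
      break;
    }
    count++;
  }
  return writeBuffer.splice(0, count);
}
```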

@maxfliri added the enhancement (New feature or request) label on Feb 28, 2022
@darrachequesne
Member

Hi! You are right, I was indeed able to reproduce the issue.

Note: this does not apply to the WebSocket transport, as each packet is sent in its own frame; but since the client connects with HTTP long-polling first, this is a real issue.

If I understand correctly:

  • the 1st solution relies on a max number of packets per HTTP request, which should solve the issue in most cases, but setting a fixed limit sounds a bit arbitrary
  • the 2nd solution seems cleaner, but requires an update to the Engine.IO protocol in order to include the maxHttpBufferSize value in the handshake sent to the client upon connection.

I'd rather go for the 2nd solution, what do you think?

Related: socketio/socket.io#3946

@maxfliri
Author

maxfliri commented Mar 7, 2022

The 2nd solution would be ideal; I suggested it as the second option because I thought it could be significantly more complicated, in particular because of the difficulty of predicting the size of the payload before serialization. I don't know in detail how this works in socket.io, but in my experience this tends to become quite complicated when you start dealing with charsets, compression, etc.

So yes, the 2nd option is better overall; but if it turns out to be unfeasible, or to require too much effort, the 1st option would work as well for my specific situation. And as a user of the library, an imperfect but workable solution available sooner is more valuable than a better solution in 6 months or more. 😄

Regarding your note about long-polling: our application has some users using long-polling only, typically because they are unable to upgrade to WebSocket due to firewall or proxy restrictions. For these users the issue can happen at any time; in fact, that is how I noticed the problem.

darrachequesne added a commit to socketio/engine.io that referenced this issue Mar 10, 2022
So that clients in HTTP long-polling can decide how many packets they
have to send to stay under the maxHttpBufferSize value.

This is a backward compatible change which should not mandate a new
major revision of the protocol (we stay in v4), as we only add a field
in the JSON-encoded handshake data:

```
0{"sid":"lv_VI97HAXpY6yYWAAAC","upgrades":["websocket"],"pingInterval":25000,"pingTimeout":5000,"maxPayload":1000000}
```

Related: socketio/socket.io-client#1531
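For reference, a minimal sketch of how a client could read the new field out of the OPEN packet shown above; the leading "0" is the Engine.IO OPEN packet type, and the parsing code is illustrative rather than the actual engine.io-client implementation:

```ts
// Illustrative parsing of the Engine.IO v4 OPEN packet; not the actual
// engine.io-client code.
interface HandshakeData {
  sid: string;
  upgrades: string[];
  pingInterval: number;
  pingTimeout: number;
  maxPayload: number; // new field: mirrors the server's maxHttpBufferSize
}

function parseOpenPacket(packet: string): HandshakeData {
  if (packet.charAt(0) !== "0") {
    throw new Error("not an OPEN packet");
  }
  // The rest of the packet is the JSON-encoded handshake data.
  return JSON.parse(packet.substring(1)) as HandshakeData;
}

const handshake = parseOpenPacket(
  '0{"sid":"lv_VI97HAXpY6yYWAAAC","upgrades":["websocket"],"pingInterval":25000,"pingTimeout":5000,"maxPayload":1000000}'
);
console.log(handshake.maxPayload); // 1000000
```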
darrachequesne added a commit to socketio/engine.io-client that referenced this issue Mar 12, 2022
The server will now include a "maxPayload" field in the handshake
details, allowing the clients to decide how many packets they have to
send to stay under the maxHttpBufferSize value.

Related:

- socketio/socket.io-client#1531
- socketio/engine.io@088dcb4
@darrachequesne
Member

Here we go! So, we went with the 2nd solution (see the commits referenced above).

I have a few other changes to make; I hope to release this in a few days.

@maxfliri
Author

I just upgraded to version 4.5.0 and tested it; it works great! Thanks for fixing this.

@darrachequesne
Member

Awesome, thanks for the feedback ❤️

@darrachequesne added this to the 4.5.0 milestone on Apr 26, 2022