HttpRequest now handles gzipping response bodies #38944
Conversation
I made a project to test this locally, but I get the following errors in the console once the request completes. The response is gzipped (you can check this in any browser's DevTools).
@Calinou Huh, yup, I have a repro; looking into it.
@Calinou OK, I have fixed the problem. The initial PR was not allocating its output buffer correctly, which became apparent when larger response bodies were being decompressed than what I had tested with. And in fact, it never could allocate them correctly using the existing `decompress` API: HTTP does not provide a mechanism for transmitting the expected uncompressed size of the response/request body. You have to stream-decode it and allocate as you go. This is a supported pattern in zlib; it just takes a slightly different approach than what was implemented before. So I added a new method, `decompress_dynamic`, to the `Compression` class. I notice that the other methods in `Compression` assume a known output size. I followed the best practices laid out in zlib's examples, but had to diverge a fair bit, as they are just doing file-to-file decompression.
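As a rough GDScript sketch of the user-facing difference (illustrative only, not code from the PR; it uses the single-argument form of `decompress_dynamic` discussed at this point in the thread):

```gdscript
var data := "a highly repetitive payload ".repeat(100).to_utf8()
var compressed := data.compress(File.COMPRESSION_GZIP)

# Existing API: the caller must already know the uncompressed size.
var fixed := compressed.decompress(data.size(), File.COMPRESSION_GZIP)

# New API: no output size known ahead of time, as with an HTTP body;
# the buffer is grown while stream-decoding. (A size cap is discussed
# further down in this thread.)
var dynamic := compressed.decompress_dynamic(File.COMPRESSION_GZIP)
```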
I haven't got time to do a full review yet, but I'm unsure about adding a new method for this. EDIT: At a first glance the PR looks good, I'll try to review it ASAP 🏅.
@Faless I had the same thought while I was implementing it. I can certainly refactor it to all just go through a single code path. One other question: I was thinking it'd be nice to add the …
See the …
Ok, I updated HttpRequest so it doesn't duplicate code between `request()` and `request_raw()`. I also added a switch for the gzip behavior. Lastly, I added a round-trip test:

```gdscript
var plain := "Test Gzipped Request ".repeat(36)  # the original inlined ~36 copies of the literal
var utf8Bytes := plain.to_utf8()
print("utf8Bytes: %d" % utf8Bytes.size())
var compressedBytes := utf8Bytes.compress(File.COMPRESSION_GZIP)
print("compressedBytes: %d" % compressedBytes.size())
var decompressedBytes := compressedBytes.decompress_dynamic(File.COMPRESSION_GZIP)
print("decompressedBytes: %d" % decompressedBytes.size())
print(decompressedBytes)
```

However, it appears to fail compression, returning a zero-length buffer:
```gdscript
func _on_HTTPRequest_request_completed(result, response_code, headers, body: PackedByteArray):
    print("HTTP Response: %d" % response_code)
    print(headers)
    print("result: " + str(result))
    print("Body Len: %d" % body.size())
    print("Body to utf8")
    var bodyStr := body.get_string_from_utf8()
    print("Body to utf8 DONE")
    print("Body UTF8 Len: %d" % bodyStr.length())
    print("'%s'" % bodyStr)
```

Output:
Huh, I don't quite understand the most recent build failure. Anyone who understands the inner workings of … have any ideas?
While working on this PR, I discovered that the GDScript methods for PackedByteArray were broken due to the switch away from the pooled array types. I have another PR that fixes those methods: #39100
@Faless I don't know if you got a chance to take a look, but one of the earlier commits here does get rid of the redundant code in `HTTPRequest`.
I had some time to do some more reading, and it turns out that gzipping request bodies is quite rare, and most servers don't support it out of the box. It is still done in some special cases (there is a game analytics service I use that tells you to do it), but it's rare enough that I don't think it makes sense to include in the base class. As part of this PR I exposed `request_raw`, so users who need to send a gzipped request body can still do so themselves.
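As a hedged sketch of what that leaves open to users (illustrative; `http_request` is an assumed `HTTPRequest` node, the URL is a placeholder, and the exact `request_raw()` signature may differ between Godot versions):

```gdscript
# Gzip an analytics-style JSON payload by hand and post it raw.
var payload := '{"event": "level_complete"}'.to_utf8()
var gzipped := payload.compress(File.COMPRESSION_GZIP)
var headers := ["Content-Type: application/json", "Content-Encoding: gzip"]
http_request.request_raw("https://analytics.example.com/collect", headers,
        HTTPClient.METHOD_POST, gzipped)
```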
I'm a bit concerned about `decompress_dynamic`. I think it should still have a `max_decompressed_size`. IIRC gzip has a max compression factor of about 1024, which means a server could send 5 MiB that gets unpacked into 5 GiB (in memory), thus certainly crashing the client. I know it's a very unlikely scenario, but maybe we could tie it to the `body_size_limit` of `HTTPRequest` (though by default it would still be `-1`, so unbounded).
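To make the concern concrete, here is a small illustrative snippet (not from the PR) showing how extreme the expansion can get with a degenerate payload:

```gdscript
# A buffer of zeroes compresses extremely well, so a tiny compressed
# blob can demand orders of magnitude more memory to decompress.
var zeroes := PackedByteArray()
zeroes.resize(10 * 1024 * 1024)  # 10 MiB of zeroes (resize zero-fills)
var bomb := zeroes.compress(File.COMPRESSION_GZIP)
print(bomb.size())  # on the order of 10 KiB: a ~1000x expansion on decompress
```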
Thanks for taking a look @Faless! I fixed all of the trivial things. For the max body size, I have two thoughts. Looking at line 377 of `http_request.cpp`:

```cpp
// No body len (-1) if chunked or no content-length header was provided.
// Change your webserver configuration if you want body len.
body_len = client->get_response_body_length();

if (body_size_limit >= 0 && body_len > body_size_limit) {
	call_deferred("_request_done", RESULT_BODY_SIZE_LIMIT_EXCEEDED, response_code, response_headers, PackedByteArray());
	return true;
}
```

Maybe we could do a check similar to this one. That has a big downside though: while gzip has a maximum compression ratio of ~1:1000, a more typical ratio for HTML seems to be about 1:10. So any magic number we plug in here really wouldn't handle the edge cases, I don't think.
So my second idea is to enforce some limit during decompression: maybe add a new max-size parameter to `decompress_dynamic`.
Ok, I implemented the 2nd idea.
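A hedged sketch of how the capped call might look from GDScript, assuming the new size limit is passed as the first argument (its exact name isn't spelled out in this thread):

```gdscript
# Cap the decompressed output so a hostile server can't exhaust memory.
var limit := 4 * 1024 * 1024  # 4 MiB, an arbitrary example cap
var out := compressed.decompress_dynamic(limit, File.COMPRESSION_GZIP)
if out.size() == 0:
    # Either the data was invalid or the cap was exceeded.
    push_error("Decompression failed or exceeded the size limit.")
```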
Yeah, that's good. There is still a styling issue reported by CI, and also an integer comparison that needs a cast.
Not quite sure how to squash with merge commits in the history. I know GitHub can squash on merge; might that work?
Merge commits are removed when rebasing.
Thanks @akien-mga, guess I should read that before making future PRs :P Should be all squashed now.
It's a little difficult to test at the moment, because PackedByteArray's GDScript methods are broken (see #39100), so the response body can't be easily printed to the terminal right now.
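A hypothetical debugging workaround in the meantime, assuming plain element access on the array still works while the conversion methods are broken:

```gdscript
# Rebuild the string one byte at a time: slow and ASCII-only, debug use.
var text := ""
for b in body:
    text += char(b)
print(text)
```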
Squashed commit: Added request_raw to HttpRequest; added decompress_dynamic to the Compression class; added decompress_dynamic to BytePoolArray; merge doc fix revert.
@Faless I fixed the merge issue from a recent docs change in master. Are there any remaining issues with the previous review and the changes you had requested?
@Wavesonics no, I think this is good now, but it's depending on #39100 (and possible changes to GDScript's PackedByteArray bindings).
#39100 should be good now (or at least I cleaned up the debug message that was there). What sort of changes were you thinking of?
Looking good! Great job 🥇 !
Was just looking for this, glad to see it implemented! 🎉 Any chance this is back-portable?
It looks feasible while keeping backwards compatibility. The main difficult part is refactoring the copied code in `HTTPRequest`.
HttpRequest will add an "Accept-Encoding: gzip" header (only if the user has not already specified that header), allowing servers to send compressed payloads.
HttpRequest will detect "Content-Encoding" when receiving data, and will decompress gzip payloads, delivering only the decompressed data to the "request_completed" signal.
HttpClient::request_raw is now available to use via HttpRequest::request_raw. This allows users complete control over their request body. This is then used by this PR to send a gzipped body if the new property "gzip_request" is true. Not all servers support gzip request bodies, so "gzip_request" defaults to false.
This solves #6433
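For illustration, a minimal usage sketch of the behavior described above (not part of the PR; the URL is a placeholder, and the Godot 3-style `connect()` call is an assumption about the API of the day):

```gdscript
extends Node

func _ready():
    var req := HTTPRequest.new()
    add_child(req)
    req.connect("request_completed", self, "_on_request_completed")
    # "Accept-Encoding: gzip" is added to the request automatically.
    req.request("https://example.com/api")

func _on_request_completed(result, response_code, headers, body):
    # If the server replied with "Content-Encoding: gzip", the body
    # arrives here already decompressed.
    print(body.get_string_from_utf8())
```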
For testing I have a Godot project which talks to a locally running HTTP server written in Go. The Go server has two endpoints. The first, /plain, accepts non-compressed request payloads; it respects the Accept-Encoding header, and thus will send back a response that is gzipped.
The Godot project will print the response to the terminal, proving it was decompressed successfully by this change.
The second endpoint, /gzip, accepts a gzipped payload, decompresses it on the server, and then sends it back to the client. It also respects the encoding header, and thus will re-compress the body and send it back to the client compressed.
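A hedged sketch of a client-side driver for those two endpoints (not from the linked project; the port, the `http_request` node variable, and the exact `request()`/`request_raw()` signatures are assumptions):

```gdscript
var base := "http://localhost:8080"  # assumed port for the local Go server

# /plain: plain request out; the gzipped response comes back already
# decompressed thanks to this PR.
http_request.request(base + "/plain")

# /gzip: gzipped request out, recompressed response back.
# (An HTTPRequest node handles one request at a time; run these sequentially.)
var body := "round trip".to_utf8().compress(File.COMPRESSION_GZIP)
http_request.request_raw(base + "/gzip", ["Content-Encoding: gzip"],
        HTTPClient.METHOD_POST, body)
```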
Godot Test Project:
https://fugitivethegame.online/download/HttpGzipClient.zip
Go Http Server (Godot project talks to this when running locally):
https://fugitivethegame.online/download/HttpGzipServer.zip