content-length for gzipped data? #52
Your statement is a little weird, though, because chunked transfer encoding and Content-Length are mutually exclusive: when the body is transferred chunked, there is no Content-Length header to send.
And of course, on top of that, the Node.js API for writing headers will switch to chunked encoding automatically when no Content-Length is set.
It should not come as a surprise, given the context here: everything works great except IE, which seems to be treating our chunks as corrupted data.
Weird, because I use IE a lot for testing and use this module without issue. Is there a way I can reproduce the issue? Just using the example in the README works just fine for me in IE 10.
We're debugging atm; hopefully that leads to an MCVE, or we find the problem along the way and prove ourselves wrong in thinking it's connection()... will update once we know either way!
Ok, keep me updated. I can test as well to help speed things up (and prevent a never-closing issue). What version of Node.js are you using, what version of IE, and what version of Windows?
Also, the version of this module, if it is not 1.5.2.
This discussion started because of https://github.com/mozilla/thimble.webmaker.org/issues/1064. We're sending a TAR file (stream) to the browser. Here's a minimal example to reproduce the issue:

```js
var express = require("express");
var compression = require("compression");
var tarStream = require("tar-stream");

var app = express();
app.use(compression());

function buildTarStream() {
  var pack = tarStream.pack();
  pack.entry({name: "/index.html"}, "<!DOCTYPE html><html><head><title>Hello World</title></head><body></body></html>");
  pack.entry({name: "/style.css"}, "p { color: red; }");
  pack.finalize();
  return pack;
}

app.get("/", function(req, res) {
  var stream = buildTarStream();
  res.type("application/x-tar");
  stream.pipe(res);
});

app.listen(9000, function() {
  console.log("Server running on port 9000");
});
```

When I hit this with IE 11, I get a ~1K file compared with the ~3K file I get with compression turned off. It appears to me that IE isn't decompressing the file. It's also quite possible that I'm just doing something wrong here, and if you notice something that doesn't make sense, I'd appreciate any advice. At the least, doing what I think is the obvious thing doesn't work in IE, and it might point at a bug or doc fix that could help other users.
Hi @humphd, can you send me a Fiddler trace of the request/response when you hit this in IE11? You can upload it somewhere or simply email it to me directly (my email is in my GitHub profile).
@humphd, I also need answers to the following questions: what version of Node.js, IE, Windows, and this module are you using?
Let me figure out how to run Fiddler and I'll post that next.
Cool. Looking at your example so far, the actual response stream it spits out, as captured by Wireshark, is 100% spec-compliant and works fine (I assume this is why all other browsers function just fine). My guess here is that the ArrayBuffer you are getting in IE11 contains the compressed tar rather than the expanded tar, i.e. perhaps there is a bug in IE11 where in certain cases it does not apply the Content-Encoding decoding. When you get that ArrayBuffer, what are the first two bytes? 1f 8b or something else?
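The check described above can be sketched as a small helper (a minimal illustration; the function name and the sample buffers are mine, not from the thread):

```js
// Returns true if the buffer starts with the gzip magic number 1f 8b.
function looksGzipped(arrayBuffer) {
  var bytes = new Uint8Array(arrayBuffer);
  return bytes.length >= 2 && bytes[0] === 0x1f && bytes[1] === 0x8b;
}

// A gzip stream always begins with 1f 8b, so a still-compressed
// ArrayBuffer is detected, while plain tar bytes are not.
var gzipped = new Uint8Array([0x1f, 0x8b, 0x08, 0x00]).buffer;
var plain = new Uint8Array([0x75, 0x73, 0x74, 0x61]).buffer;
console.log(looksGzipped(gzipped)); // true
console.log(looksGzipped(plain));   // false
```

In a page you would pass `xhr.response` (with `responseType = "arraybuffer"`) to this function to see whether the browser handed you decoded or still-compressed bytes.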
I agree with you, and also suspect the bug is on the IE side. In Chrome on Windows I get …
The two bytes 1f 8b are the gzip header, so that means in your ArrayBuffer you do indeed have the gzip'd tar. This means, unfortunately, you have only the following choices:

a. In your XMLHttpRequest, try seeing if you can just not send the Accept-Encoding header for IE11, so then IE11 will just not get compressed responses.

I can help you with writing code for any of these, if you decide which you want to implement.
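The same effect can also be reached from the server side with this module's `filter` option, skipping compression for IE 11 entirely. This is a sketch under my own assumptions (the `shouldCompress` name and the User-Agent test are mine; only the `filter` option itself is part of the module's documented API):

```js
// Hypothetical predicate: skip compression for IE 11, whose
// User-Agent string contains "Trident/7.0".
function shouldCompress(req) {
  var ua = (req.headers && req.headers["user-agent"]) || "";
  return !/Trident\/7\./.test(ua);
}

// Sketch of wiring it into compression's documented `filter` option
// (assumes express and compression are installed):
//
//   var compression = require("compression");
//   app.use(compression({
//     filter: function (req, res) {
//       return shouldCompress(req) && compression.filter(req, res);
//     }
//   }));

var ie11 = { headers: { "user-agent": "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko" } };
console.log(shouldCompress(ie11)); // false
```

User-Agent sniffing is fragile, so this trades a bit of correctness for a workaround that needs no client changes.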
@dougwilson, thanks a lot for being so willing to help with this. It's rare to come across a maintainer that is willing to go down a rabbit hole like this. Based on what we were seeing, I did another experiment, and removing the application/x-tar content type made the problem go away. So maybe IE special-cases what it thinks is a compressed stream, and leaves it compressed. I'm not sure if you have docs somewhere that this tidbit could go, but it certainly seems like someone else will hit it at some point.
Hi @humphd, it is no problem at all! Your information regarding the behavior in relation to the content type is very useful. I'm going to look around online to see if I can find some information regarding this, perhaps a reason or at least a deep dive/full description of what the heck is going on :) This would help a lot if we were to add a note to the README for people to understand what is happening (it also reduces questions here, since we're just pointing to something instead of it looking like we just came up with that information).
Talked to MS engineers about this, and was told that they've filed a bug internally to deal with it. |
Interesting, @humphd, that it's actually a bug :) It was almost seeming like they did it on purpose for some unknown reason. If this is truly a bug, it is very unlikely that we'll end up documenting it on this module, as otherwise the README ends up being a global collection of bug information. If there is a good site/page that is a global collection of bug information, I can definitely add that link to the README :) !
I would also like content to be gzipped on the fly by this middleware. I don't want CloudFront serving half-retrieved files because it couldn't use the Content-Length to verify that it had the whole file. None of the files I want to serve are in the GB range (referring to the reason given by @dougwilson why this can't be achieved). Would it not be possible to add the functionality and simply warn users in the documentation that it has a size limit?
No. If you want that functionality, use or make a middleware that will "de-chunkify" your responses. Then you don't have to convince all modules that send chunked responses to add an option not to do that. |
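A "de-chunkify" middleware of the kind suggested above could be sketched like this: buffer every chunk the response would have streamed, then send the whole body at once with an explicit Content-Length. The name `bufferResponse` and all details are mine, not a published package:

```js
// Buffers all response writes, then emits one body with Content-Length.
function bufferResponse(req, res, next) {
  var chunks = [];
  var origWrite = res.write;
  var origEnd = res.end;

  function toBuffer(chunk, encoding) {
    return Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk, encoding);
  }

  res.write = function (chunk, encoding) {
    if (chunk) chunks.push(toBuffer(chunk, encoding));
    return true; // pretend the write was flushed
  };

  res.end = function (chunk, encoding) {
    if (chunk) chunks.push(toBuffer(chunk, encoding));
    var body = Buffer.concat(chunks);
    res.setHeader("Content-Length", body.length);
    // Restore the real methods and send everything in one shot.
    res.write = origWrite;
    res.end = origEnd;
    res.end(body);
  };

  next();
}
```

Registered before `compression()` (so that compression's patched `write`/`end` feed into the buffering versions), the buffered bytes would be the compressed output, and the Content-Length would match what goes over the wire. Interactions with other middleware, and the memory cost of holding the whole body, are exactly the trade-offs being discussed here and are not verified in this sketch.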
Is there a way to get the Content-Length value for data that was run through compression, so that even in chunked transfer mode the final content length can be sent along for browsers that rely on that value to properly terminate the transfer?