Should mixed content always be blocked? #813

Closed
delapuente opened this issue Jan 7, 2016 · 32 comments

@delapuente

Currently, trying to fetch content served by HTTP inside a service worker results in the request being blocked by the UA. Is this the expected behaviour? If so, which parts of the current specification support it?

I found this comment in a Chrome bug, referring to that:
https://code.google.com/p/chromium/issues/detail?id=448672#c4

It says fetch is not an optionally-blockable request context, but per the "should fetching request be blocked as mixed content" and "should response to request be blocked as mixed content" algorithms, passthrough requests should be allowed (maybe with an opaque response, but that's another story).

@wanderview
Member

I believe the intent is to allow network requests that would originally have passed mixed content checks to continue passing even if a SW proxies them with evt.respondWith(fetch(evt.request)). I just don't think anyone has implemented that yet.
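
For concreteness, the kind of pass-through handler being talked about is just this (a minimal sketch):

```js
// sw.js — minimal pass-through: the service worker forwards the page's own
// request unchanged. The intent described above is that a request which would
// have passed mixed content checks without the SW should still pass when
// proxied like this.
self.addEventListener('fetch', evt => {
  evt.respondWith(fetch(evt.request));
});
```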

@wanderview
Member

I also think there are some possible issues with the current spec. The Mixed Content spec uses this "passthrough request" concept, but that does not exist in the Fetch spec. In addition, it's unclear whether "passthrough request" is something that would be persisted in the Cache API, etc.

@jakearchibald
Contributor

We either prevent mixed content from going into the cache, or we need to provide a hook to show a mixed content warning on response when the cache returns a mixed asset.

@annevk
Member

annevk commented Jan 21, 2016

The response has a URL that can be used for that, no?

@delapuente
Author

IMHO it should be allowed into the cache, and we can use the protocol part of the URL to tell whether it's mixed content. We don't need the hook; we can simply warn in the regular way.
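
Something like this (just a sketch; `request` stands for whatever Request is in hand):

```js
caches.match(request).then(response => {
  // Decide "mixed content" from the scheme of the stored response's URL.
  // (Caveat: response.url is "" for opaque no-cors responses, so from script
  // this only works when the URL is exposed; the UA knows it either way.)
  if (response && response.url && new URL(response.url).protocol === 'http:') {
    // insecure — the UA could surface the regular mixed content warning here
  }
});
```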

@annevk
Member

annevk commented Jan 21, 2016

Anyway, the main problem is what @wanderview referred to above: what "passthrough request" means and how that works. I once tried to sort this out with @mikewest, but we didn't really follow through, unfortunately. Probably my fault.

@delapuente
Author

But passthrough request is a concept already defined:

More formally, request is a passthrough request if the following conditions hold [FETCH]:

  1. request’s initiator is fetch
  2. request’s window is an environment settings object (and, therefore, not no-window)
  3. request’s client’s global object is a ServiceWorkerGlobalScope object.

@mikewest
Member

I thought we did follow through. If the MIX spec is incorrect, tell me and I'll fix it. I think it reflects what we discussed way back whenever we discussed this.

@wanderview
Member

  1. request’s window is an environment settings object (and, therefore, not no-window)

How can this be persisted to the Cache? In most cases the window would be long gone. So storing a passthrough request in the Cache would strip its passthrough status.

But I guess it mostly doesn't matter if it can be persisted. You can do cache.match(passthroughRequest) to return a cached response. It will match the persisted request even though one is "passthrough" and the other is not. The respondWith() mixed content checking will then use the original FetchEvent.request to determine whether it's "passthrough" when deciding if the response is ok.

I think the only problem is if you did a cache.keys() and then used one of the resulting requests in a fetch(). Then it would no longer be treated as passthrough and fail.
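
Roughly, the two situations I mean (a sketch, with a made-up cache name):

```js
// (1) cache.match(passthroughRequest): the FetchEvent's own request keeps its
// passthrough status, so the respondWith() mixed content check can still be
// made against it even though the stored copy lost that status.
self.addEventListener('fetch', evt => {
  evt.respondWith(
    caches.match(evt.request).then(cached => cached || fetch(evt.request))
  );
});

// (2) cache.keys() then fetch(): the Requests that come back out of the cache
// are no longer passthrough, so re-fetching an http:// one from the worker
// would now fail the mixed content check.
caches.open('v1')
  .then(cache => cache.keys())
  .then(requests => Promise.all(requests.map(request => fetch(request))));
```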

Right? Or have I confused myself again?

@delapuente
Author

  1. request’s window is an environment settings object (and, therefore, not no-window)

How can this be persisted to the Cache? In most cases the window would be long gone. So storing a passthrough request in the Cache would strip its passthrough status.

Why do you need the window to be persisted? You only need it to decide whether the request is passthrough or not. And you can't even serialize the fact that a request is a passthrough request, because that is transient, depending on the initiator, window and client.

But I guess it mostly doesn't matter if it can be persisted. You can do cache.match(passthroughRequest) to return a cached response. It will match the persisted request even though one is "passthrough" and the other is not. The respondWith() mixed content checking will then use the original FetchEvent.request to determine whether it's "passthrough" when deciding if the response is ok.

I think the procedure here should be to act as if we were performing a fetch() from the network: match() would run the "should fetching request be blocked as mixed content" algorithm for the passed request (after checking whether there was actually a match), and when retrieving the response it would run the "should response to request be blocked as mixed content" algorithm (even though the response may be cached and serialized).

I think the only problem is if you did a cache.keys() and then used one of the resulting requests in a fetch(). Then it would no longer be treated as passthrough and fail.

I don't see the point here. Would you mind clarifying what the problem is in this case?

@annevk
Member

annevk commented Jan 22, 2016

@wanderview you are correct. It might be okay that the latter case fails because "passthrough" shouldn't work if there's no window, for instance. So maybe there is no problem here.

@annevk
Member

annevk commented Jan 22, 2016

@mikewest yeah, I guess I'm still unsure as to whether Fetch should have "passthrough" as a thing or not.

@jakearchibald
Contributor

F2F resolution: we should allow passthrough of MIX content and allow adding it to the cache.

We should look at persisting MIX warnings until storage is cleared.

@jakearchibald added this to the Version 1 milestone on Jul 25, 2016
@jakearchibald
Contributor

I know it's miserable, but I'd like to revisit this. If there's no implementor interest, we remove it and wait for more developers wanting it.

If we're keeping it, it seems a shame to persist MIX warnings until the cache is cleared. If we can, we should hold back the warning until the cached item's body is read.

@jakearchibald
Contributor

F2F:

  • We should show mixed content warnings if a request is made to HTTP
  • We should also show mixed content warnings if an HTTP response is used (by <img> or such), as this may not have involved a request, since it may have come from the cache
  • Ditch the whole having to clear the cache thing

@mikewest
Member

Do these conclusions require changes to MIX or not? Sorry, it's not at all clear to me what the impact of these three bullets actually is.

@annevk
Member

annevk commented Jul 29, 2016

If we're keeping it, it seems a shame to persist MIX warnings until the cache is cleared. If we can, we should hold back the warning until the cached item's body is read.

In a way, once you have used mixed content, everything you do is tainted from that point forward. I don't understand how the browser could then indicate that there is no mixed content.

Basically, your argument appears to be that the bit of information you get from whether that request was successful or not does not affect the security of the user. I'm not sure we can make that determination.

@jakearchibald
Contributor

@mikewest I'll go through the spec and file an issue against MIX if it looks like changes are needed (and of course let us know if we're wrong).

@annevk It seemed like it was previously suggested that the page would show mixed content warnings in a fresh navigation simply because there was a mixed opaque response in the cache. That seems too severe. I realise that you now know whether the request was successful or not, but I can retain that information in localstorage and avoid the forever-warning penalty. In fact, I can load a mixed image, get its width/height, store that in IDB, and use it in a future navigation without penalty.
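
For example, this kind of thing already sidesteps any lasting warning today (a sketch; the image URL and database/store names are made up):

```js
// Load an http: image on an https: page (optionally-blockable, so it loads
// with a console warning), read its intrinsic size, and stash it in IndexedDB
// for use on future navigations — with no persistent penalty.
const img = new Image();
img.onload = () => {
  const size = { width: img.naturalWidth, height: img.naturalHeight };
  const open = indexedDB.open('mix-demo', 1);
  open.onupgradeneeded = () => open.result.createObjectStore('sizes');
  open.onsuccess = () => {
    const tx = open.result.transaction('sizes', 'readwrite');
    tx.objectStore('sizes').put(size, img.src);
  };
};
img.src = 'http://example.com/photo.jpg';
```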

@annevk
Member

annevk commented Jul 29, 2016

Yeah, maybe we should show a warning there too. It is rather weird that we don't I think.

@jakearchibald
Contributor

Given that we probably can't track when an <img>'s intrinsic width goes into localstorage, are you suggesting a MIX tainting is set for the whole origin until all storage is cleared?

@annevk
Member

annevk commented Jul 29, 2016

Yeah, basically. Once you get tainted it requires cookie clearance by the user to get rid of it (or sufficiently wide Clear-Site-Data).

@wanderview
Member

Yeah, basically. Once you get tainted it requires cookie clearance by the user to get rid of it (or sufficiently wide Clear-Site-Data).

This basically makes it impossible to build an email site like Gmail, no? It would just always show mixed content warnings if it ever allows third-party images to be loaded.

@annevk
Member

annevk commented Aug 2, 2016

That is why Gmail proxies. Also for privacy reasons.

@jakearchibald
Contributor

The specs already handle what we spoke about in the F2F so I'm closing this. In terms of origin-tainting, let's continue in w3c/webappsec-mixed-content#7

@delapuente
Author

It is not clear to me whether we will allow performing an HTTP request from a service worker. In current Chrome we still reject the fetch when trying to do it. Try it here: https://serviceworke.rs/fetching/ and see the results for cors / http and no-cors / http.

@wanderview
Member

It's just not implemented yet.

@annevk
Member

annevk commented Feb 20, 2018

To be clear, for future readers, we ended up always blocking mixed content in service workers.

@davidmaxwaterman

To be clear, for future readers, we ended up always blocking mixed content in service workers.

I'm not sure that is 100% clear.

What does 'in service workers' actually mean?

Does it restrict requests made TO service workers (i.e. from fetch() calls on the main thread or in web workers)? Or does it only restrict requests made BY service workers to the network using fetch() (or cache.add(), etc.)?

I'm currently battling a problem where the main thread requests something via http; I was hoping to use a service worker to rewrite the request to https and then add() the result to a cache, but I get the Mixed Content message.
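
Roughly what I was attempting (URLs and cache name are illustrative):

```js
// In the service worker: upgrade the scheme and cache the https: copy.
// In practice the original http: request is blocked by the mixed content
// check before the worker ever sees it, which is the problem described above.
self.addEventListener('fetch', evt => {
  const url = new URL(evt.request.url);
  if (url.protocol === 'http:') {
    url.protocol = 'https:';
    evt.respondWith(
      caches.open('upgraded').then(cache =>
        cache.add(url.href).then(() => cache.match(url.href))
      )
    );
  }
});
```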

I can't get my head around any reason for this. I can see that requests made to the network from anywhere (main thread, web workers, service workers) would be subject to this, but why is there a restriction on requests made to the service worker? I was operating under the assumption that the channel between the main thread and the service worker was trusted.

Perhaps there isn't and I'm wrong about this; if so, I'm happy to be corrected. If this isn't an appropriate place to ask, I'll happily delete.

@annevk
Member

annevk commented Feb 22, 2022

The Mixed Content check happens at an earlier stage, that's why that happens. I think you are correct that theoretically it could be made to work. I recommend using https://w3c.github.io/webappsec-upgrade-insecure-requests/ to work around this.

@davidmaxwaterman

davidmaxwaterman commented Feb 22, 2022

The Mixed Content check happens at an earlier stage, that's why that happens. I think you are correct that theoretically it could be made to work. I recommend using https://w3c.github.io/webappsec-upgrade-insecure-requests/ to work around this.

(Thanks for responding, and so timely)
Yes, I presumed it was, but am curious why...but, yeah.
Re the 'webappsec-upgrade' reference, if I read it correctly, it would seem like it's not viable:

https://w3c.github.io/webappsec-upgrade-insecure-requests/#goals

We have two servers: one is a CloudFront URL serving https from an S3 bucket (which always serves http); the other is an API endpoint that has also been upgraded to https (probably also via CloudFront). So our servers are all set up to serve https, but the code has hard-coded http references.
It looks to me like Content-Security-Policy: upgrade-insecure-requests will help with the S3 files, but not the API calls.

Is my understanding correct? It's all quite a lot to take in, tbh - but I'll read it all and see if I can grok it.

...or should I ask for both servers to send that header, and that'll fix it? (NB: I'm front-end, so I need to communicate this to a back-end engineer.) Maybe it's easy enough just to try.

I should also point out that said back-end engineer has managed to recreate the project from map files, node_modules content, etc. (it's not an old project; we just lost the git repo, so we only have the built production code). So I am balancing the desire to get something up quickly with a 'proper' solution later, against resigning myself to no 'quick solution' and putting all effort into the 'proper solution'. As such, feel free to advise accordingly.

@annevk
Member

annevk commented Feb 22, 2022

The header needs to be served with the HTML document that has the http: references in it. It will work for any reference, including "API calls" made with XMLHttpRequest or fetch().
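
For example (a sketch assuming a plain Node.js server; the actual backend in this thread isn't specified):

```js
// Serve the HTML document with the CSP header so every http: subresource and
// fetch()/XHR it triggers gets upgraded to https: by the browser.
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  res.setHeader('Content-Security-Policy', 'upgrade-insecure-requests');
  res.setHeader('Content-Type', 'text/html');
  fs.createReadStream('index.html').pipe(res);
}).listen(8080);
```

If changing server config is awkward, the same directive can also be set from the document itself with a `<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">` tag.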

@davidmaxwaterman

The header needs to be served with the HTML document that has the http: references in it. It will work for any reference, including "API calls" made with XMLHttpRequest or fetch().

Great tip, thanks! That potentially means it is quite straightforward.
