
Sticky Ingress annotations don't appear to work at all #771

Closed
jakexks opened this issue May 26, 2017 · 12 comments · Fixed by #871

Comments

@jakexks

jakexks commented May 26, 2017

Image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.6

After deploying the ingress controller and default backend, I created a test ingress object based on the example, but the sticky session cookie isn't set. Anything I'm doing wrong?

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    ingress.kubernetes.io/affinity: "cookie"
    ingress.kubernetes.io/session-cookie-name: "blah"
    ingress.kubernetes.io/session-cookie-hash: "sha1"

spec:
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - backend:
          serviceName: default-http-backend
          servicePort: 80
        path: /
curl -i -H 'Host: stickyingress.example.com' http://10.100.2.71
HTTP/1.1 404 Not Found
Server: nginx/1.13.0
Date: Fri, 26 May 2017 10:48:03 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 21
Connection: keep-alive

default backend - 404
@aledbf
Member

aledbf commented May 26, 2017

@rikatz can you help to debug this issue?

@rikatz
Contributor

rikatz commented May 26, 2017

Sure, just checking here

@rikatz
Contributor

rikatz commented May 26, 2017

@jakexks Can you please post your '/etc/nginx/nginx.conf' from the Ingress Controller to some Pastebin (or even here) and send it to me? Also please send the start directive (the /nginx-ingress-controller arguments, or even the yaml used to deploy the ingress controller).

I've also seen that you're pointing your Ingress at the same service as the Default Backend. Since the Default Backend's upstream is created not from annotations (like an Ingress) but directly from the service, this may be colliding with it.

Try creating your ingress pointing to another service, and check if this works :)

Waiting here :)
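For reference, a minimal sketch of what that suggestion would look like, assuming a hypothetical `my-app` Service on port 80 (the service name is a placeholder, not something from this thread):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-sticky
  annotations:
    ingress.kubernetes.io/affinity: "cookie"
    ingress.kubernetes.io/session-cookie-name: "blah"
    ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - backend:
          serviceName: my-app      # a dedicated service, not default-http-backend
          servicePort: 80
        path: /
```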

@slintes

slintes commented May 29, 2017

Hey all, we came across a problem today which, as far as we understand, has the same root cause. Our problem is:

  1. we create a "normal" Ingress for a service
  2. we create another Ingress for the same service, but with a "www." prefix on the host, which only redirects to the first Ingress (we use a configuration-snippet for that). We use the same service as the backend, even though it will never be called, because Ingresses don't work without a backend.
  3. the first Ingress has some additional annotations, e.g. session affinity configuration

This works... but, as we noticed today, only sometimes, and when it works it can suddenly stop working: the session affinity configuration for the upstream is sometimes in the nginx config and sometimes not. This is what happens, to our understanding:

  1. This is the place where the upstreams are created: https://github.com/kubernetes/ingress/blob/master/core/pkg/ingress/controller/controller.go#L774. The name of the upstream is namespace + service name + service port. When the upstream already exists, further configuration is skipped; the session affinity config comes after this point.
  2. When our "normal" Ingress is processed first, everything is fine: the upstream is created and configured as wanted. When the redirect Ingress is processed, the controller detects that the upstream already exists (same service -> same upstream name) and skips the upstream configuration.
  3. But when the redirect Ingress is processed first, the upstream is created for that Ingress, without session affinity config. When the normal Ingress is processed, its upstream configuration is skipped because the upstream already exists. This results in a wrong nginx configuration.
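The ordering problem in the steps above can be sketched as follows. This is a simplified illustration of the keying behaviour, not the controller's actual code, and the types, field names, and Ingress values are made up:

```go
package main

import "fmt"

// upstream is a simplified stand-in for the controller's upstream struct.
type upstream struct {
	name            string
	sessionAffinity string // "" means no affinity configured
}

// ingressRule is a made-up flattened view of one Ingress backend reference.
type ingressRule struct {
	namespace, service, port, affinity string
}

// buildUpstreams keys upstreams by namespace+service+port, so two Ingresses
// that share a service share a single upstream entry; the one processed
// first "owns" it and later configuration is skipped.
func buildUpstreams(rules []ingressRule) map[string]*upstream {
	ups := map[string]*upstream{}
	for _, r := range rules {
		key := r.namespace + "-" + r.service + "-" + r.port
		if _, exists := ups[key]; exists {
			// Upstream already exists: all further configuration,
			// including session affinity, is skipped.
			continue
		}
		ups[key] = &upstream{name: key, sessionAffinity: r.affinity}
	}
	return ups
}

func main() {
	// The redirect Ingress happens to be processed first, so the
	// affinity annotation of the "normal" Ingress is never applied.
	ups := buildUpstreams([]ingressRule{
		{"default", "my-app", "80", ""},       // redirect Ingress, no affinity
		{"default", "my-app", "80", "cookie"}, // "normal" Ingress with affinity
	})
	fmt.Printf("affinity=%q\n", ups["default-my-app-80"].sessionAffinity)
}
```

Reversing the order of the two rules in the slice makes the affinity survive, which matches the "sometimes works, sometimes not" behaviour described above.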

What we could do on our side is use a separate service for the redirect Ingress, so that it gets its own upstream. But that sounds like a bad workaround to me, because it results in unused services and upstreams.

We think it would make more sense for the controller to deal with this better:

  1. in general: create a new upstream for every Ingress/rule/path, regardless of whether the service was already used earlier, by using a more specific upstream name.
  2. for our special "redirect" use case: allow Ingresses without backends.

Kind regards, Marc

@rikatz
Contributor

rikatz commented May 29, 2017

Yes @slintes, you're right.

@aledbf This needs some kind of rewriting of the ingress core, as the proposal here is to have an upstream per Ingress object.

It happens that when we have two Ingress objects using the same backend, the first created Ingress is the 'owner' of the upstream, and the other Ingress objects using the same upstream are then not considered :)

I can take a look at this, but I'm pretty busy these days here at work.

Thanks

@aledbf
Member

aledbf commented May 29, 2017

as the proposal here is to have an upstream per Ingress object.

I really don't want to introduce that change. Maybe we can duplicate the upstream definition only in those cases where session affinity or custom load balancing algorithm is configured.
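A rough sketch of that idea (not an actual implementation, and the function and names are made up): keep the shared namespace-service-port upstream for the common case, and give an upstream a distinct key only when an Ingress configures session affinity, so only those cases get a duplicated definition.

```go
package main

import "fmt"

// upstreamKey sketches aledbf's suggestion: the shared key stays unchanged
// unless affinity is configured, in which case the Ingress name is appended
// so this Ingress gets its own dedicated upstream and its configuration
// can't be masked by another Ingress that uses the same service.
func upstreamKey(namespace, service, port, ingressName, affinity string) string {
	key := namespace + "-" + service + "-" + port
	if affinity != "" {
		// Duplicate the upstream definition only when affinity
		// (or, analogously, a custom balancing algorithm) is set.
		key += "-" + ingressName
	}
	return key
}

func main() {
	fmt.Println(upstreamKey("default", "my-app", "80", "redirect-ing", ""))       // shared upstream
	fmt.Println(upstreamKey("default", "my-app", "80", "sticky-ing", "cookie")) // dedicated upstream
}
```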

@arjanschaaf

So @jakexks, your attempt to get the sticky session example working should use a service other than default-http-backend, to work around the problem discussed in the comments above.

@rikatz
Contributor

rikatz commented Jun 5, 2017

@aledbf @jakexks So can we close this issue, since the problem identified here is the reuse of the default-backend service?

@aledbf by the way, is there a reason we don't change the upstream creation? Just curious :)

Thanks

@aledbf
Member

aledbf commented Jun 5, 2017

by the way, is there a reason we don't change the upstream creation? Just curious :)

What do you mean?

@rikatz
Contributor

rikatz commented Jun 5, 2017

About my suggestion of creating an upstream per Ingress object.

@aledbf
Member

aledbf commented Jun 5, 2017

About my suggestion of creating an upstream per Ingress object.

I repeat my answer from a previous comment: "Maybe we can duplicate the upstream definition only in those cases where session affinity or custom load balancing algorithm is configured."
This is on my TODO list :)
This is in my TODO list :)

@rikatz
Contributor

rikatz commented Jun 5, 2017

Right :) thanks. Let me know if you need some help. I'm pretty busy here, but I can try to work on this on weekends :)

Thanks!
