
Termination of TLS in Load Balancer with Helm chart config #8017

Closed
brsolomon-deloitte opened this issue Dec 6, 2021 · 40 comments

@brsolomon-deloitte

brsolomon-deloitte commented Dec 6, 2021

https://kubernetes.github.io/ingress-nginx/deploy/#aws shows the direct application of a YAML manifest using kubectl apply -f. It is confusing / unspecified whether applying this file will function as a patch on existing resources that have been deployed with the Helm chart.

What is the suggested workflow for configuring termination of TLS in the Load Balancer, when the ingress-nginx Helm chart is used? What does an MCVE config set of values for Helm look like?

There are several scattered threads such as #1624 (comment) that allude to this, but #5374 and #5360 contain very little prescriptive guidance on the proper configuration with the ingress-nginx Helm chart.

Looking at https://github.com/raw/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/aws/deploy-tls-termination.yaml, it looks like that has been generated with something like helm template in the first place, so it would be useful to see what inputs resulted in that output.
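
For context, a minimal sketch of how such a manifest could be rendered from the chart (the values file name here is a placeholder, not the project's actual generation input):

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
# Render the chart with a provider-specific values file into a static manifest
$ helm template ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx \
    -f aws-nlb-tls-values.yaml > deploy-tls-termination.yaml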

@brsolomon-deloitte brsolomon-deloitte added the kind/bug Categorizes issue or PR as related to a bug. label Dec 6, 2021
@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Dec 6, 2021
@brsolomon-deloitte
Author

brsolomon-deloitte commented Dec 6, 2021

There is also no mention of the change to a port named tohttps on 2443, which seems like it must be specified in .Values.controller.containerPort.
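
Presumably that means something like the following in the chart values (my guess from reading the generated manifest; the http/https entries are the chart defaults, and only the tohttps entry is the undocumented part):

controller:
  containerPort:
    http: 80
    https: 443
    tohttps: 2443  # extra port that the AWS http-snippet redirect server listens on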

@brsolomon-deloitte
Author

Edit: Just a little scary to see it is actually done through a shell script

@longwuyuan
Contributor

Recommend https://kubernetes.github.io/ingress-nginx/deploy/#tls-termination-in-aws-load-balancer-nlb

/remove-kind bug
/kind documentation
/area docs

@k8s-ci-robot k8s-ci-robot added kind/documentation Categorizes issue or PR as related to documentation. area/docs and removed kind/bug Categorizes issue or PR as related to a bug. labels Dec 7, 2021
@brsolomon-deloitte
Author

@longwuyuan your reply includes the first link used in my original question, which is the very part of the documentation that this issue is about in the first place.

@longwuyuan
Contributor

Need some clarification here.

These words

It is confusing / unspecified whether applying this file will function as a patch on existing resources that have been deployed with the Helm chart.

seem to imply that the YAML file might act as a patch of sorts, applied on top of and after a Helm-based installation of the ingress-nginx controller?

If I am reading it wrong, I apologise. If not, then just to state the obvious: installing with Helm and installing with the YAML file are independent, alternative ways to install the ingress-nginx controller, and there is no patching of resources.

You can run helm template against the chart to get a view of the manifests for the resources involved in running the ingress controller, and you can peruse the YAML file I linked to get the same view.

Hope there is a solution to this issue soon.

@brsolomon-deloitte
Author

brsolomon-deloitte commented Dec 7, 2021

seem to imply, that there are some possibilities, of that yaml file, being a patch of sorts, over and above and after a helm based installation of the ingress-nginx controller ?

Correct. So I now understand that it is not intended to function as an idempotent patch. Thank you for clarifying.

Considering that information, it seems like the suggested approach is not idiomatic and falls victim to several anti-patterns:

  • Using the already-templated deploy-tls-termination.yaml is incompatible with an existing Helm-based deployment
  • The docs suggest / imply that I should manually find/replace a couple of specific parameters in the template such as "XX.XX.XX.XX/XX". This is not the programmatic way that I would hope to use Kubernetes/Helm
  • The docs entirely omit which Helm chart parameters would be required to achieve the same effect - namely, those found halfway down a shell script that is itself not mentioned in the docs
  • Last but not least, that script contains the following comment: "legacy in-tree service load balancer controller for AWS NLB that has been phased out from Kubernetes mainline". It's unclear what this means. What is an "in-tree service"? Is this approach considered deprecated and should I prefer the AWS Load Balancer Controller instead?

@longwuyuan
Contributor

longwuyuan commented Dec 7, 2021 via email

@gijsdpg

gijsdpg commented Dec 14, 2021

I'm also confused by the structure of the manual. I'm trying to set up TLS termination in an AWS load balancer, but currently the documentation reads as if the only option is downloading and editing a YAML file.

@gijsdpg

gijsdpg commented Dec 14, 2021

Ok, I think I figured it out. I can replicate the deploy-tls-termination.yaml configuration with helm with the following procedure:

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm template -n ingress-nginx --create-namespace ingress-nginx -f values.yaml ingress-nginx/ingress-nginx > manifest.yaml

where values.yaml is a file containing:

controller:
  service:
    externalTrafficPolicy: "Local"
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
  config:
    http-snippet: |
      server{
        listen 2443;
        return 308 https://$host$request_uri;
      }
    proxy-real-ip-cidr: XXX.XXX.XXX/XX
    use-forwarded-headers: 'true'
    type: "LoadBalancer"
    ports:
      http:  80
      https: 443

    targetPorts:
      http: tohttps
      https: http

You can adjust the values file to set your own appropriate values.
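
If you would rather install directly than render a manifest, the same values file should work with the usual Helm install flow:

$ helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
    -n ingress-nginx --create-namespace -f values.yaml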

@brsolomon-deloitte
Author

brsolomon-deloitte commented Dec 14, 2021

@gijsdpg if you check out the shell script that is used to generate deploy-tls-termination.yaml, you will see a few very slight differences:

controller:
  service:
    type: LoadBalancer
    externalTrafficPolicy: Local
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    targetPorts:
      http: tohttps
      https: http
  # Configures the ports the nginx-controller listens on
  containerPort:
    http: 80
    https: 80
    tohttps: 2443
  config:
    proxy-real-ip-cidr: XXX.XXX.XXX/XX
    use-forwarded-headers: "true"
    http-snippet: |
      server {
        listen 2443;
        return 308 https://\$host\$request_uri;
      }

(Note the backslashes are only there because they are used in a Bash heredoc and should go away if you are editing YAML directly.)
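
In plain YAML the same snippet would read:

http-snippet: |
  server {
    listen 2443;
    return 308 https://$host$request_uri;
  }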

What threw us off is the need to use an NLB rather than an ALB for this.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 10, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 10, 2022
@mtb-xt

mtb-xt commented May 12, 2022

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 12, 2022
@dwelch2344
Contributor

Ok, I think I figured it out. I can replicate the deploy-tls-termination.yaml configuration with helm with the following procedure:

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm template -n ingress-nginx --create-namespace ingress-nginx -f values.yaml ingress-nginx/ingress-nginx > manifest.yaml

where values.yaml is a file containing:

controller:
  service:
    externalTrafficPolicy: "Local"
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
  config:
    http-snippet: |
      server{
        listen 2443;
        return 308 https://$host$request_uri;
      }
    proxy-real-ip-cidr: XXX.XXX.XXX/XX
    use-forwarded-headers: 'true'
    type: "LoadBalancer"
    ports:
      http:  80
      https: 443

    targetPorts:
      http: tohttps
      https: http

you can use the values file to set your appropriate values.

THANK YOU! I've been wrestling with this all night (complicated by the fact we needed ARM64 support, which slightly breaks the manual install methods) and this finally got us there :)

@alanlekah

alanlekah commented Aug 24, 2022

I don't understand why a working version of SSL termination with helm was removed from the README during the migration from the old repo. This is the only thing that I got working:

https://github.com/helm/charts/blob/master/stable/nginx-ingress/README.md#aws-l4-nlb-with-ssl-redirection

  config:
    ssl-redirect: "false" # we use `special` port to control ssl redirection
    server-snippet: |
      listen 8000;
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }
  containerPort:
    http: 80
    https: 443
    special: 8000
  service:
    targetPorts:
      http: http
      https: special
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "your-arn"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"

and installed with:
helm upgrade --install -n ingress-nginx --create-namespace ingress-nginx -f values.yaml ingress-nginx/ingress-nginx

The current docs with L7 and SSL termination did not work; only the above did. With the L7 solution in the current README, I was only led to more 308 redirects. Hope this helps someone else who got stuck.

@longwuyuan
Contributor

  • I think we were/are convinced that the changed deployment docs were good

  • That change happened months ago, so this is the first report in months related to the AWS deployment docs

  • Can you confirm that the second section of the AWS deployment docs (https://kubernetes.github.io/ingress-nginx/deploy/#aws), which relates to terminating TLS on the NLB instead of on the controller, does not work as documented?

  • If you are specifically reporting that the Helm chart installation does not work for termination of TLS on the NLB, then note that the volume of customization involved is simply too much to document, considering the number of providers multiplied by the number of unique annotations and configurations possible.

  • Additionally, in GitOps circles you will find that a raw YAML manifest adheres to the single-source-of-truth principle, which is preferred over maintaining Helm versions in addition to Git versions

  • Several Helm users have reported similar needs for other providers such as DigitalOcean. So if you want, you can submit a PR to add a section to the deployment docs for Helm usage with specific providers. But resources are scarce, and such docs (and even code) do not get the maintenance attention they are due

@alanlekah

  • Right, I'm confirming that I tried the installation doc above in AWS (https://kubernetes.github.io/ingress-nginx/deploy/#aws) with no luck. I was stuck in the repetitive 308 redirect cycle that was reported before and closed.

  • I completely understand there's way too much customization based on providers.

  • I wouldn't have used Helm if the raw YAML manifest worked as-is, but that old README was the only way I could make this work in my configuration, even after trying every change recommended here (as well as the deploy.yml in the article above with no changes other than the cert ARN and VPC CIDR of my EKS cluster):
    Http -> https redirect on TCP ELB terminating ssl, results in a 308 redirect loop. #2724

  • I would happily create a PR for the documentation above if you think that would make sense here

@longwuyuan
Contributor

@alanlekah now someone has to test it, which involves an AWS account, and I don't know whether the free tier works for this, etc. But what is critical now is:

  • The project cannot be publishing a broken AWS install procedure
  • Since you have graciously offered to help fix the current doc with a PR, please go ahead with that
  • But the PR must fix the static YAML manifest first; adding an alternative method using Helm is left to you as an option
  • Please check /hack/generate-deploy-scripts.sh and, more importantly, the values file at /hack/manifest-templates/provider/aws/nlb-with-tls-termination/values.yaml in the project
  • The script uses the values file to generate provider-specific manifests

Once again, thank you very much for reporting this and for helping fix the doc.

I will take your word for it and label the issue as well as your PR accordingly. But I still don't understand what precisely is broken in the current NLB termination doc. If you could describe that in a way that any reader can understand, it will help the much-needed stabilization work in progress for the project.

/triage accepted
/area stabilization
/area docs
/priority important-soon
/kind bug
/assign @alanlekah

@k8s-ci-robot k8s-ci-robot added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Aug 25, 2022
@k8s-ci-robot k8s-ci-robot added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. kind/bug Categorizes issue or PR as related to a bug. and removed needs-priority labels Aug 25, 2022
@longwuyuan
Contributor

@tao12345666333 @rikatz @strongjz this latest comment by @alanlekah says our deployment doc for AWS is broken. Do we just use our own personal accounts to test on AWS?

@alanlekah

alanlekah commented Aug 25, 2022

@longwuyuan I'm not suggesting it's the deployment doc that's broken. I may be suggesting that the deploy.yml at:

https://github.com/raw/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml

may still have issues. I'm not terribly familiar with what the difference is between the helm chart that generates the working raw yaml and the file at the link above but here's what I noticed and what worked for me:

  • I didn't use the proxy-real-ip-cidr as suggested; I got rid of it entirely
  • I replaced the following http-snippet:
http-snippet: |
    server {
      listen 2443;
      return 308 https://$host$request_uri;
    }

with this one:

server-snippet: "listen 8000;\nif ( $server_port = 80 ) {\n   return 308 https://$host$request_uri;\n}\n"

effectively adding an if condition and changing the listen port from 2443 to 8000 (shown unescaped just after this list)

  • Added in the same data section (as the http-snippet) the disabling of ssl redirects:
ssl-redirect: "false"
  • Helm doesn't set externalTrafficPolicy: Local like the deploy YAML above does; Cluster (the default value) seems to work for me
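
For readability, the escaped server-snippet string above is equivalent to this block-scalar form (same content, no behavior change):

server-snippet: |
  listen 8000;
  if ( $server_port = 80 ) {
     return 308 https://$host$request_uri;
  }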

I generated the working yaml with the following helm template command:
helm template -n ingress-nginx --create-namespace ingress-nginx -f values.yaml ingress-nginx/ingress-nginx

The problem:

I created a simple API with a helm deployment listening on port 5000. The helm chart created an ingress with the nginx annotations and a ClusterIP service (also on port 5000 for each). When using the installation guide AS-IS, I get a 308 permanent redirect error in postman.

When using the values.yaml and the helm template using the values.yaml mentioned in the comment above (#8017 (comment)), I had no issues accessing the endpoint in HTTPS and HTTP redirect was working correctly as well. I have no idea why.

I'm going to test each of those changes in the file listed in the main yml in this repo and see which one actually fixes the issue.

@rikatz
Contributor

rikatz commented Aug 25, 2022

I know this may be cliche, but I don't have access or ideas on how AWS works. Can someone jump into our next call to show the problem? Maybe we can discuss what can be done there :)

@alanlekah

I know this may be cliche, but I don't have access or ideas on how AWS works. Can someone jump into our next call to show the problem? Maybe we can discuss what can be done there :)

Frankly, I've tested all the combinations of changes that I've made from this values.yaml against the stock deploy.yml and don't know where the issue is in the file from the repo. I just know that what Helm generates works.

@alanlekah

alanlekah commented Aug 25, 2022

Attaching the working yml for comparison (against the non-working one)

deploy.yml.zip

@longwuyuan
Contributor

With the latest updates from @alanlekah I think we have some idea of the next action item. We need to diff the YAML that works for him against the manifest we publish. Maybe we paid attention to the vanilla AWS manifest and messed up the NLB-termination one, as it is in a subdir.

@alanlekah that descriptive update from you was helpful, and as I was typing this I saw you posted the manifest you used, thanks. Please just clarify this: you said you did not use proxy-real-ip-cidr, and you also said you replaced that snippet. I want to confirm that you first generated a YAML with helm template and then deleted proxy-real-ip-cidr, modified the snippet, etc. in that generated file.

@alanlekah

Something seems very off. For example: https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml#L473

See this section in controller-v1.3.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml:

        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 80
          name: https
          protocol: TCP

Looks like that was fixed in the parent one but not in the TLS termination yaml?

To create the working yaml, I only did the following (I did not replace or add anything). I created the values.yaml:

controller:
  config:
    ssl-redirect: "false" # we use `special` port to control ssl redirection
    server-snippet: |
      listen 8000;
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }
  containerPort:
    http: 80
    https: 443
    special: 8000
  service:
    targetPorts:
      http: http
      https: special
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<arn>"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"

helm upgrade --install -n ingress-nginx --create-namespace ingress-nginx -f values.yaml ingress-nginx/ingress-nginx

That comparison was just a rough diff I did on the files.
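
For anyone reproducing that comparison, a rough sketch of the diff (file names here are illustrative):

$ helm template ingress-nginx ingress-nginx/ingress-nginx \
    -n ingress-nginx -f values.yaml > rendered.yaml
$ curl -sLo published.yaml https://github.com/raw/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
$ diff rendered.yaml published.yaml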

@longwuyuan
Contributor

I was about to post the diff, so thanks, that is a great catch.

This confirms that our generate-deploy-script needs improvement. I suspect the root cause is the subdir: the script traverses parent dirs and not subdirs.

We will fix it. Thanks again for reporting this, and if you find other differences, please let us know.

@Volatus when you get a chance, can you please ping me on Slack about this? I think we missed traversing the subdir in generate-deploy-script because AWS is a unique case in having a subdir. We can talk about options like making two AWS providers or adding a special if condition for the AWS provider, etc.

@longwuyuan
Contributor

@alanlekah Does port 8443 ring any bells?

I don't recall seeing port 8000 before, but I may have missed something, so just asking. We will check anyway.

@alanlekah

alanlekah commented Aug 25, 2022

@longwuyuan Yes it does. Port 8000 is something I changed (from 2443). I'm not sure if it makes any difference though.

See the old docs for nginx: https://github.com/helm/charts/tree/master/stable/nginx-ingress#aws-l4-nlb-with-ssl-redirection

8443 is the webhook one AFAIK. I believe the special: 8000 is the same as the current tohttps: 2443 that's in the repo and serves the same purpose.

@longwuyuan
Contributor

OK, thanks. Once again, good pointer on the old doc. The generate-deploy-script went through 3-4 iterations, so using that history will be a rabbit hole. I think we are better off basing our fix on the old doc for sanity's sake.

@alanlekah

Something interesting: changing the Helm values.yaml file to the following also works without issue, so it seems the port name and number don't make a difference:

controller:
  config:
    ssl-redirect: "false" # we use `special` port to control ssl redirection
    server-snippet: |
      listen 2443;
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }
  containerPort:
    http: 80
    https: 443
    tohttps: 2443
  service:
    targetPorts:
      http: http
      https: tohttps
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<arn>"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"

@longwuyuan
Contributor

That is pretty useful info; it saves testing that aspect at least. Thank you very much. The task now is to feed the automated generation of that manifest with the good configs we are learning here.

@alanlekah

alanlekah commented Aug 25, 2022

Yep. That tohttps is basically already what you have in the parent one. Interestingly, if you change this:

server-snippet: |
      listen 2443;
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }

to:

server-snippet: |
      listen 2443;
      return 308 https://$host$request_uri;

the 308 permanent redirect issue returns. Hopefully that helps narrow it down even more.

Edit: The issue being that you get stuck in a 308 loop when calling a service that's behind nginx: Exceeded maxRedirects. Probably stuck in a redirect loop
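
My reading of why the guard matters (an inference from how the configmap snippets behave, not verified against the controller code): server-snippet is injected into every generated server block, and with the targetPorts mapping above the NLB sends the already-decrypted HTTPS traffic to the same port the snippet listens on, so an unconditional 308 also redirects requests that were already HTTPS. Roughly:

# Simplified sketch of the effective server block (not the controller's literal output)
server {
    listen 80;     # plain HTTP forwarded from the NLB's port 80 listener
    listen 2443;   # decrypted HTTPS forwarded from the NLB's port 443 listener
    # Without the $server_port guard, the 308 fires for traffic on 2443 too,
    # i.e. for requests that were already HTTPS, so the client loops forever.
    if ( $server_port = 80 ) {
        return 308 https://$host$request_uri;
    }
    # ...location blocks generated by the controller...
}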

@longwuyuan
Contributor

longwuyuan commented Aug 25, 2022

Got it. We can't do 80, so another port was instrumented, but the 308 should only be sent one time, the first time (or something like that).

@alanlekah

I believe it has to be 80 since it's an issue with forwarding HTTP to HTTPS traffic. I was under the impression that's why the port 80 check was one part of the fix.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 23, 2022
@StepanKuksenko

@alanlekah could you please explain why we can't use just this?

if ( $server_port = 80 ) {
   return 308 https://$host$request_uri;
}

Why do we even need the 2443 port?
Also, why do we need this

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"

instead of

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"

@alanlekah

alanlekah commented Nov 30, 2022

@StepanKuksenko I'm not sure why nginx would use the 2443 port in the deploy.yml from here which is part of the install doc for AWS NLB installation here

From master, this doc has the correct working settings.

I tried the following:

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"

without issues as well.

@k8s-triage-robot

This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Deprioritize it with /priority important-longterm or /priority backlog
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Feb 28, 2023
@longwuyuan
Contributor

From reports on other AWS-related issues, it seems that the only change not documented in this project is that use of the new AWS Load Balancer Controller is a requirement.

While it would be worth mentioning that explicitly in this project's deployment docs, it is also assumed that AWS users know that installing the AWS Load Balancer Controller is a prerequisite for creating NLBs.

Since there is no traction on this issue and there is a permanently open issue about possible docs improvements, I will close this issue for now. If the original reporter wants to discuss a docs PR, or finds and posts data related to this issue that points to a problem to be solved in the controller, please re-open the issue. There are too many inactive issues open, which clutters project management.

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

In response to this:

From reports on other AWS-related issues, it seems that the only change not documented in this project is that use of the new AWS Load Balancer Controller is a requirement.

While it would be worth mentioning that explicitly in this project's deployment docs, it is also assumed that AWS users know that installing the AWS Load Balancer Controller is a prerequisite for creating NLBs.

Since there is no traction on this issue and there is a permanently open issue about possible docs improvements, I will close this issue for now. If the original reporter wants to discuss a docs PR, or finds and posts data related to this issue that points to a problem to be solved in the controller, please re-open the issue. There are too many inactive issues open, which clutters project management.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
