Example Service Entry load balancing issue and mTLS connection #124

Closed
iandyh opened this issue Jul 21, 2020 · 8 comments
Labels
bug Something isn't working

Comments

iandyh commented Jul 21, 2020

Describe the bug
This is not a bug in Admiral usage per se. I am following the blog post https://istio.io/latest/blog/2020/multi-cluster-mesh-automation/ to understand the idea behind Admiral, but encountered the following issues:

  1. After creating the service entry:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-test
  namespace: caas-sentinel
spec:
  endpoints:
  - address: sample-app.caas-sentinel.svc.cluster.local
    locality: jpe1/jpe1b
    ports:
      http: 80
  hosts:
  - productpage.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  addresses:
  - 240.0.0.10
  resolution: DNS

the Envoy configuration does not look correct to me:

         "lb_endpoints": [
          {
           "endpoint": {
            "address": {
             "socket_address": {
              "address": "sample-app.caas-sentinel.svc.cluster.local",
              "port_value": 80
             }
            }
           },
           "load_balancing_weight": 1
          }
         ],
         "load_balancing_weight": 1
        }

After I changed the resolution to STATIC and used the actual pod IPs, the configuration looked correct: the sidecar proxy load-balances directly across pods. I am not sure whether this is a bug (probably in Istio) or by design, but it would be great if someone could confirm.
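
For illustration, a minimal sketch of the STATIC variant (the pod IP below is a hypothetical placeholder; the real configuration uses the actual pod IPs):

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-test
  namespace: caas-sentinel
spec:
  hosts:
  - productpage.global
  location: MESH_INTERNAL
  addresses:
  - 240.0.0.10
  ports:
  - name: http
    number: 80
    protocol: http
  resolution: STATIC
  endpoints:
  - address: 10.1.2.3   # hypothetical pod IP; an IP, not a hostname
    locality: jpe1/jpe1b
    ports:
      http: 80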

The second issue is that, with the same service entry above, the mTLS connection to sample-app.caas-sentinel does not work. I get the upstream connect error or disconnect/reset before headers. reset reason: connection termination error.

Steps To Reproduce
Istio 1.6
Create the above service entry
Turn on mTLS for the target remote service as shown here: https://istio.io/latest/docs/tasks/security/authentication/authn-policy/ (see the sketch below)
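
A minimal sketch of the mTLS step, assuming a namespace-wide strict PeerAuthentication policy as in that task:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: caas-sentinel
spec:
  mtls:
    mode: STRICT   # only mutual-TLS traffic is accepted by workloads in this namespace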

Expected behavior
The sidecar proxy should load-balance across pods instead of calling the service FQDN directly.
mTLS should work with the local service.

Thanks a lot for your help!

aattuluri (Contributor) commented:

@iandyh
i) above is a known limitation in Istio; I just raised a feature request:
istio/istio#25898

For ii), did you make an HTTP request to sample-app.caas-sentinel? Try using the full name, sample-app.caas-sentinel.svc.cluster.local, to see if that works.

iandyh (Author) commented Jul 28, 2020

@aattuluri Thanks for the reply!

i) Will keep an eye on that issue.
ii) If I call the remote service directly with the Kubernetes FQDN, mTLS works. But if I use productpage.global, it fails. I guess SNI verification fails here.

aattuluri (Contributor) commented:

@iandyh
You can configure subjectAltNames in your ServiceEntry if you are sure that it is SAN verification that is failing.
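
A minimal sketch of the field; note that in-mesh workload certificates carry a SPIFFE identity rather than a DNS name, so the value below assumes the default cluster.local trust domain and a hypothetical sample-app service account:

  subjectAltNames:
  - spiffe://cluster.local/ns/caas-sentinel/sa/sample-app   # hypothetical service account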

iandyh (Author) commented Jul 31, 2020

@aattuluri After adding

  subjectAltNames:
  - sample-app.caas-sentinel.svc.cluster.local

it does not help...

aattuluri (Contributor) commented:

@iandyh Can you add this annotation to your deployment under spec -> template -> metadata -> annotations, make some requests, and share the logs?

sidecar.istio.io/logLevel: debug
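
A minimal sketch of where the annotation goes (deployment name, labels, and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
      annotations:
        sidecar.istio.io/logLevel: debug   # raises the sidecar proxy's log level
    spec:
      containers:
      - name: sample-app
        image: sample-app:latest   # placeholder image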

aattuluri (Contributor) commented:

I am not sure if it's a SAN verification failure; debug logging might help identify the actual issue.

iandyh (Author) commented Aug 3, 2020

@aattuluri Looking at the logs, it only shows a 503 with the UC flag (upstream connection termination).

[2020-08-03T06:27:08.472Z] "GET / HTTP/1.1" 503 UC "-" "-" 0 95 1 - "-" "curl/7.70.0-DEV" "10845a32-7cfd-911d-b0c0-aab47eec9238" "productpage.global" "100.102.139.207:80" outbound|80||productpage.global 100.86.65.151:40152 100.99.126.31:80 100.86.65.151:38286 - default

aattuluri (Contributor) commented:

This is not an Admiral bug; closing this issue.
