
[BUG] Question: How does cluster load balancing work? #102

Closed · kenakamu opened this issue May 15, 2020 · 23 comments
Labels: bug (Something isn't working)

@kenakamu
Describe the bug
I followed the multi-cluster setup sample to set up my AKS clusters.

Steps To Reproduce
Simply follow the steps in the example. I am using Istio 1.15.1, Kubernetes 1.15.10, and Admiral 0.9.

Expected behavior
After completing the steps, I should get "Hello World! - Admiral!!" plus a greeting from the remote cluster.

Actual behavior
Responses always come back from the local cluster, so I only see "Hello World! - Admiral!!". I also added a GTP, but it is still the same.

The ServiceEntry is created as expected with correct values.
How do I troubleshoot this?
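
For reference, the generated objects can be listed like this (a command sketch using Istio's short resource names; admiral-sync is the sync namespace from the sample):

> kubectl get se,vs,dr -n admiral-sync -o yaml --context eastcluster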

@kenakamu added the bug label on May 15, 2020
@kenakamu (Author)

Additional info: I checked the DestinationRule and realized there is no load-balancing rule added, even though I created a GTP.
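
A quick check for whether the GTP weights made it into the generated DR (a command sketch, using the DR name from this setup):

> kubectl get dr default.greeting.global-default-dr -n admiral-sync --context eastcluster -o jsonpath='{.spec.trafficPolicy.loadBalancer}'

Empty output means no locality load-balancing settings were generated.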

@kenakamu (Author)

Update: when I create the GTP like below, it seems to work:

spec:
  policy:
    - dns: default.myapp.global
      lbType: 1 #0 represents TOPOLOGY, 1 represents FAILOVER
      target:
        - region: eastus/*
          weight: 10
        - region: westus/*
          weight: 90

The original sample didn't have /* as part of the region.

@kenakamu (Author)

I am not sure whether Admiral respects the GTP: sometimes it seems to work, but it goes back to 50:50 in the end. I still don't see it update the DR. Any advice on how to troubleshoot or debug would be appreciated.

@kenakamu reopened this on May 15, 2020
@aattuluri (Contributor)

@kenakamu Can you show the GTP you created and the ServiceEntry/DestinationRule getting generated?

Regarding the lack of load balancing: do the nodes in your clusters have region labels? Can you share the output of the following command from both clusters:
kubectl get nodes --show-labels

@kenakamu (Author)

I have 3 AKS clusters: eastus (eastcluster), westus (westcluster), and centralus (admiral).
I installed Istio on east and west, then made centralus the Admiral control plane.

Since I don't have Istio installed on central, I installed both remotecluster.yaml and demosinglecluster.yaml on the three clusters. The following are the node labels:

centralus

>kubectl get nodes --show-labels --context admiral
NAME                                STATUS   ROLES   AGE     VERSION    LABELS
aks-agentpool-11930115-vmss000000   Ready    agent   3d12h   v1.15.10   agentpool=agentpool,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_DS2_v2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=centralus,failure-domain.beta.kubernetes.io/zone=0,kubernetes.azure.com/cluster=MC_AdmiralTest_admiral_centralus,kubernetes.azure.com/mode=system,kubernetes.azure.com/role=agent,kubernetes.io/arch=amd64,kubernetes.io/hostname=aks-agentpool-11930115-vmss000000,kubernetes.io/os=linux,kubernetes.io/role=agent,node-role.kubernetes.io/agent=,storageprofile=managed,storagetier=Premium_LRS

eastus

>kubectl get nodes --show-labels --context eastcluster
NAME                                STATUS   ROLES   AGE     VERSION    LABELS
aks-agentpool-17193733-vmss000000   Ready    agent   3d12h   v1.15.10   agentpool=agentpool,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_DS2_v2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eastus,failure-domain.beta.kubernetes.io/zone=0,kubernetes.azure.com/cluster=MC_AdmiralTest_eastcluster_eastus,kubernetes.azure.com/mode=system,kubernetes.azure.com/role=agent,kubernetes.io/arch=amd64,kubernetes.io/hostname=aks-agentpool-17193733-vmss000000,kubernetes.io/os=linux,kubernetes.io/role=agent,node-role.kubernetes.io/agent=,storageprofile=managed,storagetier=Premium_LRS
aks-agentpool-17193733-vmss000002   Ready    agent   9h      v1.15.10   agentpool=agentpool,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_DS2_v2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eastus,failure-domain.beta.kubernetes.io/zone=1,kubernetes.azure.com/cluster=MC_AdmiralTest_eastcluster_eastus,kubernetes.azure.com/mode=system,kubernetes.azure.com/role=agent,kubernetes.io/arch=amd64,kubernetes.io/hostname=aks-agentpool-17193733-vmss000002,kubernetes.io/os=linux,kubernetes.io/role=agent,node-role.kubernetes.io/agent=,storageprofile=managed,storagetier=Premium_LRS

westus

>kubectl get nodes --show-labels --context westcluster
NAME                                STATUS   ROLES   AGE     VERSION    LABELS
aks-agentpool-14499801-vmss000000   Ready    agent   3d12h   v1.15.10   agentpool=agentpool,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_DS2_v2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=westus,failure-domain.beta.kubernetes.io/zone=0,kubernetes.azure.com/cluster=MC_AdmiralTest_westcluster_westus,kubernetes.azure.com/mode=system,kubernetes.azure.com/role=agent,kubernetes.io/arch=amd64,kubernetes.io/hostname=aks-agentpool-14499801-vmss000000,kubernetes.io/os=linux,kubernetes.io/role=agent,node-role.kubernetes.io/agent=,storageprofile=managed,storagetier=Premium_LRS
aks-agentpool-14499801-vmss000002   Ready    agent   9h      v1.15.10   agentpool=agentpool,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_DS2_v2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=westus,failure-domain.beta.kubernetes.io/zone=1,kubernetes.azure.com/cluster=MC_AdmiralTest_westcluster_westus,kubernetes.azure.com/mode=system,kubernetes.azure.com/role=agent,kubernetes.io/arch=amd64,kubernetes.io/hostname=aks-agentpool-14499801-vmss000002,kubernetes.io/os=linux,kubernetes.io/role=agent,node-role.kubernetes.io/agent=,storageprofile=managed,storagetier=Premium_LRS

The created objects are as follows.
On eastcluster:
Service Entry

> k get se -n admiral-sync -o yaml --context=eastcluster
apiVersion: v1
items:
- apiVersion: networking.istio.io/v1beta1
  kind: ServiceEntry
  metadata:
    creationTimestamp: "2020-05-17T06:58:53Z"
    generation: 8
    labels:
      identity: greeting
    name: default.greeting.global-se
    namespace: admiral-sync
    resourceVersion: "386629"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/serviceentries/default.greeting.global-se
    uid: c85a3a27-0a7b-4d82-b7cb-ca2db415dc94
  spec:
    addresses:
    - 240.0.10.1
    endpoints:
    - address: greeting.sample.svc.cluster.local
      locality: eastus
      ports:
        http: 80
    - address: 13.87.226.150
      locality: westus
      ports:
        http: 15443
    hosts:
    - default.greeting.global
    location: MESH_INTERNAL
    ports:
    - name: http
      number: 80
      protocol: http
    resolution: DNS
- apiVersion: networking.istio.io/v1beta1
  kind: ServiceEntry
  metadata:
    creationTimestamp: "2020-05-17T06:59:05Z"
    generation: 8
    labels:
      identity: webapp
    name: default.webapp.global-se
    namespace: admiral-sync
    resourceVersion: "386494"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/serviceentries/default.webapp.global-se
    uid: 6c0eeb27-10c1-433e-ad79-eea0a3b900fb
  spec:
    addresses:
    - 240.0.10.2
    endpoints:
    - address: webapp.sample.svc.cluster.local
      locality: eastus
      ports:
        http: 80
    - address: 13.87.226.150
      locality: westus
      ports:
        http: 15443
    hosts:
    - default.webapp.global
    location: MESH_INTERNAL
    ports:
    - name: http
      number: 80
      protocol: http
    resolution: DNS
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Virtual Service

> k get vs -n admiral-sync -o yaml --context=eastcluster
apiVersion: v1
items:
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    creationTimestamp: "2020-05-17T06:58:53Z"
    generation: 1
    name: default.greeting.global-default-vs
    namespace: admiral-sync
    resourceVersion: "357370"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/virtualservices/default.greeting.global-default-vs
    uid: 14fecdaf-dd62-42e5-b83b-36e2860e3232
  spec:
    exportTo:
    - '*'
    gateways:
    - istio-multicluster-ingressgateway
    hosts:
    - default.greeting.global
    http:
    - route:
      - destination:
          host: greeting.sample.svc.cluster.local
          port:
            number: 80
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    creationTimestamp: "2020-05-17T06:59:05Z"
    generation: 1
    name: default.webapp.global-default-vs
    namespace: admiral-sync
    resourceVersion: "357419"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/virtualservices/default.webapp.global-default-vs
    uid: 6ff57251-38d0-4f3a-8be8-39f240382c02
  spec:
    exportTo:
    - '*'
    gateways:
    - istio-multicluster-ingressgateway
    hosts:
    - default.webapp.global
    http:
    - route:
      - destination:
          host: webapp.sample.svc.cluster.local
          port:
            number: 80
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Destination Rule

> k get dr -n admiral-sync -o yaml --context=eastcluster
apiVersion: v1
items:
- apiVersion: networking.istio.io/v1beta1
  kind: DestinationRule
  metadata:
    creationTimestamp: "2020-05-17T06:58:53Z"
    generation: 1
    name: default.greeting.global-default-dr
    namespace: admiral-sync
    resourceVersion: "357369"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/destinationrules/default.greeting.global-default-dr
    uid: 63a6abc1-5788-4e21-9655-cf1964152d8f
  spec:
    host: default.greeting.global
    trafficPolicy:
      outlierDetection:
        baseEjectionTime: 120s
        consecutive5xxErrors: 10
        interval: 5s
      tls:
        mode: ISTIO_MUTUAL
- apiVersion: networking.istio.io/v1beta1
  kind: DestinationRule
  metadata:
    creationTimestamp: "2020-05-17T06:59:05Z"
    generation: 1
    name: default.webapp.global-default-dr
    namespace: admiral-sync
    resourceVersion: "357418"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/destinationrules/default.webapp.global-default-dr
    uid: a39aee96-1576-4b79-b942-36a3188bf78c
  spec:
    host: default.webapp.global
    trafficPolicy:
      outlierDetection:
        baseEjectionTime: 120s
        consecutive5xxErrors: 10
        interval: 5s
      tls:
        mode: ISTIO_MUTUAL
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The following are on westcluster:
Service Entry

> k get se -n admiral-sync -o yaml --context=westcluster
apiVersion: v1
items:
- apiVersion: networking.istio.io/v1beta1
  kind: ServiceEntry
  metadata:
    creationTimestamp: "2020-05-17T13:35:51Z"
    generation: 7
    labels:
      identity: greeting
    name: default.greeting.global-se
    namespace: admiral-sync
    resourceVersion: "384620"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/serviceentries/default.greeting.global-se
    uid: e0377674-2a41-453f-bb2c-e5dee46d467d
  spec:
    addresses:
    - 240.0.10.1
    endpoints:
    - address: 52.191.83.42
      locality: eastus
      ports:
        http: 15443
    - address: greeting.sample.svc.cluster.local
      locality: westus
      ports:
        http: 80
    hosts:
    - default.greeting.global
    location: MESH_INTERNAL
    ports:
    - name: http
      number: 80
      protocol: http
    resolution: DNS
- apiVersion: networking.istio.io/v1beta1
  kind: ServiceEntry
  metadata:
    creationTimestamp: "2020-05-17T13:36:26Z"
    generation: 7
    labels:
      identity: webapp
    name: default.webapp.global-se
    namespace: admiral-sync
    resourceVersion: "384485"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/serviceentries/default.webapp.global-se
    uid: 6901d630-c56d-477b-a3b7-8fa483f799f1
  spec:
    addresses:
    - 240.0.10.2
    endpoints:
    - address: 52.191.83.42
      locality: eastus
      ports:
        http: 15443
    - address: webapp.sample.svc.cluster.local
      locality: westus
      ports:
        http: 80
    hosts:
    - default.webapp.global
    location: MESH_INTERNAL
    ports:
    - name: http
      number: 80
      protocol: http
    resolution: DNS
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Virtual Service

> k get vs -n admiral-sync -o yaml --context=westcluster
apiVersion: v1
items:
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    creationTimestamp: "2020-05-17T13:35:51Z"
    generation: 1
    name: default.greeting.global-default-vs
    namespace: admiral-sync
    resourceVersion: "382773"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/virtualservices/default.greeting.global-default-vs
    uid: 94bb1df6-e1ac-4c05-a2a7-d85b7ba4b75a
  spec:
    exportTo:
    - '*'
    gateways:
    - istio-multicluster-ingressgateway
    hosts:
    - default.greeting.global
    http:
    - route:
      - destination:
          host: greeting.sample.svc.cluster.local
          port:
            number: 80
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    creationTimestamp: "2020-05-17T13:36:26Z"
    generation: 1
    name: default.webapp.global-default-vs
    namespace: admiral-sync
    resourceVersion: "382867"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/virtualservices/default.webapp.global-default-vs
    uid: 9d8db553-5f29-4302-8f99-4402fdf6df2a
  spec:
    exportTo:
    - '*'
    gateways:
    - istio-multicluster-ingressgateway
    hosts:
    - default.webapp.global
    http:
    - route:
      - destination:
          host: webapp.sample.svc.cluster.local
          port:
            number: 80
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Destination Rule

> k get dr -n admiral-sync -o yaml --context=westcluster
apiVersion: v1
items:
- apiVersion: networking.istio.io/v1beta1
  kind: DestinationRule
  metadata:
    creationTimestamp: "2020-05-17T13:35:51Z"
    generation: 1
    name: default.greeting.global-default-dr
    namespace: admiral-sync
    resourceVersion: "382772"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/destinationrules/default.greeting.global-default-dr
    uid: 4d4e698b-d52a-4134-ab8d-8fa1423db22c
  spec:
    host: default.greeting.global
    trafficPolicy:
      outlierDetection:
        baseEjectionTime: 120s
        consecutive5xxErrors: 10
        interval: 5s
      tls:
        mode: ISTIO_MUTUAL
- apiVersion: networking.istio.io/v1beta1
  kind: DestinationRule
  metadata:
    creationTimestamp: "2020-05-17T13:36:26Z"
    generation: 1
    name: default.webapp.global-default-dr
    namespace: admiral-sync
    resourceVersion: "382865"
    selfLink: /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/destinationrules/default.webapp.global-default-dr
    uid: 9857e0a8-dd0a-4932-8f94-eae24d7c9f56
  spec:
    host: default.webapp.global
    trafficPolicy:
      outlierDetection:
        baseEjectionTime: 120s
        consecutive5xxErrors: 10
        interval: 5s
      tls:
        mode: ISTIO_MUTUAL
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Run result

> while($true){kubectl exec --namespace=sample -it $(kubectl get pod -l "app=webapp" --namespace=sample -o jsonpath='{.items[0].metadata.name}') -c webapp -- curl -v http://default.greeting.global | Select-String "hello"}

Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!

@kenakamu (Author)

I also see an error in Kiali:
(screenshot: Annotation 2020-05-17 230636)

@kenakamu (Author)

By the way, if I run the same test in westcluster, I only get replies from the local westcluster:

> while($true){kubectl exec --namespace=sample -it $(kubectl get pod -l "app=webapp" --namespace=sample -o jsonpath='{.items[0].metadata.name}') -c webapp -- curl -v http://default.greeting.global | Select-String "hello"}

Remote cluster says: Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!

@kenakamu (Author)

One additional piece of info: if I delete the greeting deployment from eastcluster, it fails over to westcluster:

Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!

And when I redeploy greeting, it automatically fails back to the local one. So HA works as expected; only the load balancing differs from what the documentation says.
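
One way to inspect which endpoints and localities the webapp sidecar actually sees for the global host (an istioctl sketch; exact flags and output vary by Istio version):

> istioctl proxy-config endpoints $(kubectl get pod -l "app=webapp" -n sample -o jsonpath='{.items[0].metadata.name}') -n sample -o json | Select-String locality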

@aattuluri (Contributor)

Ok, thanks for sharing all this information. It looks like the GTP load balancer settings aren't getting applied, which makes me suspect the GTP is not being used.

Regarding the Kiali error, that's weird, because istio-multicluster-ingressgateway should be present with a multicluster install; you can run kubectl get gateway -n istio-system to double-check.

Can you also share the following information:
i) Which cluster are you creating the GTP in? (Try creating it in the east cluster if you are putting it in a different cluster.)
Also, you don't need /* in the GTP; you can remove that.

ii) Attach the admiral logs
kubectl logs <admiral-pod> -n admiral > admiral_logs.log
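
To spot the GTP processing quickly, you can filter the logs for trafficpolicy events (a sketch, using PowerShell as elsewhere in this thread):

> kubectl logs <admiral-pod> -n admiral | Select-String trafficpolicy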

@kenakamu (Author)

The ingress gateway actually exists, and failover works anyway.

> kubectl get gateway -n istio-system
NAME                                AGE
istio-ingressgateway                32h
istio-multicluster-ingressgateway   32h

I applied the GTP in eastcluster today, but nothing changed.

> k get gtp -A -o yaml --context eastcluster
apiVersion: v1
items:
- apiVersion: admiral.io/v1alpha1
  kind: GlobalTrafficPolicy
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"admiral.io/v1alpha1","kind":"GlobalTrafficPolicy","metadata":{"annotations":{},"labels":{"identity":"greeting"},"name":"gtp-service1","namespace":"sample"},"spec":{"policy":[{"dns":"default.greeting.global","lbType":1,"target":[{"region":"westus","weight":50},{"region":"eastus","weight":50}]}]}}
    creationTimestamp: "2020-05-18T00:53:03Z"
    generation: 1
    labels:
      identity: greeting
    name: gtp-service1
    namespace: sample
    resourceVersion: "437511"
    selfLink: /apis/admiral.io/v1alpha1/namespaces/sample/globaltrafficpolicies/gtp-service1
    uid: 49f417d3-0a33-4021-ae0e-3a96c4b755b5
  spec:
    policy:
    - dns: default.greeting.global
      lbType: 1
      target:
      - region: westus
        weight: 50
      - region: eastus
        weight: 50
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

admiral_logs.admiral.log
admiral_logs.eastcluster.log
admiral_logs.westcluster.log

When I create my own ServiceEntry, the load balancing works as expected:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: greeting.sample.global
  namespace: sample
spec:
  addresses:
  - 240.0.10.3
  endpoints:
  - address: greeting.sample.svc.cluster.local
    ports:
      http: 80
  - address: 13.87.226.150
    ports:
      http: 15443
  hosts:
  - greeting.sample.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  resolution: DNS

> while($true){kubectl exec --namespace=sample -it $(kubectl get pod -l "app=webapp" --namespace=sample -o jsonpath='{.items[0].metadata.name}') -c webapp -- curl -v http://greeting.sample.global | Select-String "Hello"}

Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Hello World! - Admiral!!
Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!
Hello World! - Admiral!!
Remote cluster says: Hello World! - Admiral!!

@aattuluri (Contributor)

While I am looking into this, there are a couple of points I want to make:
i) You do not need to run Admiral in your east and west clusters.
ii) You need to create the dependency record in central (as that's your Admiral control plane cluster); I see that it was created in east.

Also, from the Admiral logs I can clearly see that the GTP is being skipped; looking into it further:

time="2020-05-18T00:53:03Z" level=info msg="op=Added type=trafficpolicy name=gtp-service1 cluster= message=received"
time="2020-05-18T00:53:03Z" level=info msg="op=Added type=trafficpolicy name=gtp-service1 cluster=, e=Skipping, no matched deployments"

@kenakamu (Author) · May 18, 2020

Thanks for the answer. The issue is that it's not clear to me how to set this up with an Admiral-only cluster plus two Istio clusters; the sample only explains the two-cluster case where Admiral co-exists with the first cluster.

I cannot create some resources due to missing namespaces or CRDs.

Could you give me slightly clearer instructions for installing with multiple clusters?

For example, if I run remotecluster.yaml only, the Dependency CRD is missing; and if I try to create the GTP in the admiral cluster, I cannot, because the sample namespace does not exist.

@kenakamu (Author)

I deleted the admiral namespace from both the east and west clusters, deleted the dependency from eastcluster, and re-created it in the admiral cluster. The system works the same as before; the log still says no matching deployments:

time="2020-05-18T05:38:11Z" level=info msg="op=Updated type=trafficpolicy
name=gtp-service1 cluster= message=received"
time="2020-05-18T05:38:11Z" level=info msg="op=Updated type=trafficpolicy
name=gtp-service1 cluster=, e=Skipping, no matched deployments"
time="2020-05-18T05:40:11Z" level=info msg="op=Updated type=trafficpolicy
name=gtp-service1 cluster= message=received"
time="2020-05-18T05:40:11Z" level=info msg="op=Updated type=trafficpolicy
name=gtp-service1 cluster=, e=Skipping, no matched deployments"

@aattuluri (Contributor)

I found the issue: I think the example deployment lost a label in recent edits before the release.

To fix it in your clusters, add the label to the greeting deployment in both the east and west clusters:
kubectl label deployment greeting -n sample identity=greeting

For example, the greeting deployment will look like this after running the command above:

...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting
  namespace: sample
  labels:
    identity: greeting
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greeting
  template:
.....

Once you do that, you should see the DestinationRule updated with the locality load balancing section.
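
For reference, the locality section in the updated DestinationRule should look roughly like this for a 50/50 GTP (a sketch; exact Admiral output may differ):

trafficPolicy:
  loadBalancer:
    localityLbSetting:
      distribute:
      - from: '*'
        to:
          eastus/*: 50
          westus/*: 50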

PS:
The deployment match happens on deployment labels, and the example didn't have a label on the deployment. I have fixed the release artifact to include the right label. Thanks for reporting this bug.
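
A quick sanity check (command sketch) that the label Admiral matches on is present:

> kubectl get deploy -n sample -l identity=greeting --context eastcluster

If this returns nothing, the GTP keeps logging "Skipping, no matched deployments".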

@kenakamu (Author)

Cool. Do we need to add that to the remote sample as well?

@aattuluri (Contributor)

Excellent point about the Admiral-only cluster setup; we will modify the current install structure to accommodate this scenario (let me know if you want to contribute). We have overrides at Intuit to achieve a similar setup.

Basically:
i) Dependencies live in the Admiral cluster.
ii) GTPs and Istio resources live in the Admiral-monitored clusters.
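
For illustration, a dependency record along the lines of the sample (created in the Admiral cluster; names taken from the example apps, exact schema per the Admiral release you run):

apiVersion: admiral.io/v1alpha1
kind: Dependency
metadata:
  name: dependency
  namespace: admiral
spec:
  source: webapp
  identityLabel: identity
  destinations:
  - greeting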

@aattuluri (Contributor)

Yes, we need to add it to the remote sample as well.

@kenakamu (Author)

And do you know when changes get picked up? It generates the DR as expected, but when I change the GTP, Admiral hasn't updated the DR for 10 minutes already.

@aattuluri (Contributor)

The sync interval configured for this example Admiral installation is 20s, so it shouldn't take that long.

Can you make a few more edits and share the logs if you see similar delays?

Are you observing a traffic pattern matching the generated DR?

@kenakamu (Author)

Sorry, it seems my environment didn't have enough resources. After restarting everything and adding an additional node, it started working. Thanks a bunch.

@kenakamu (Author)

Sorry for reopening this again.
I don't think we can set 0 for a locality weight? This is the error I see when I set west to 0 and east to 100:

time="2020-05-18T06:38:24Z" level=info msg="op=Update type=DestinationRule name=default.greeting.global-default-dr cluster=eastcluster, e=admission webhook "validation.istio.io" denied the request: configuration is invalid: 1 error occurred:\n\t* locality weight must be in range [1, 100]\n\n"

If I use 10:90 or 20:80, it updates as expected.

@kenakamu reopened this on May 18, 2020
@aattuluri (Contributor)

@kenakamu I think this is a bug; the fix is to skip setting the locality with 0 weight when the other value is 100. Let me know if you are interested in submitting a PR.
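
In other words, after such a fix, a 0/100 GTP should generate a distribute block that simply omits the zero-weight locality, roughly like this (a sketch, not current Admiral output):

distribute:
- from: '*'
  to:
    eastus/*: 100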

@aattuluri (Contributor)

Fixed with #111
