This repository has been archived by the owner on Oct 24, 2023. It is now read-only.

Unable to mount a volume on VMSS #3838

Closed
amankohli opened this issue Sep 17, 2020 · 8 comments
Labels
bug Something isn't working

Comments

@amankohli

Unable to attach a volume to a VMSS node. While upgrading a cluster from 1.15.11 to 1.15.12 with aks-engine v0.54.1, the controller manager logs show that disk attach is failing and causing the Azure resource provider to throttle:

I0917 03:48:27.340351 1 attacher.go:89] Attach volume "/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-38a21327-af94-11ea-8b23-00224803698a" to instance "k8s-node-11577350-vmss00001c" failed with disk(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-38a21327-af94-11ea-8b23-00224803698a) already attached to node(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-node-11577350-vmss/virtualMachines/k8s-node-11577350-vmss_47), could not be attached to node(k8s-node-11577350-vmss00001c)
E0917 03:48:27.340721 1 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/azure-disk//subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-38a21327-af94-11ea-8b23-00224803698a podName: nodeName:}" failed. No retries permitted until 2020-09-17 03:48:27.840627696 +0000 UTC m=+34.330095570 (durationBeforeRetry 500ms). Error: "AttachVolume.Attach failed for volume "pvc-38a21327-af94-11ea-8b23-00224803698a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-38a21327-af94-11ea-8b23-00224803698a") from node "k8s-node-11577350-vmss00001c" : disk(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-38a21327-af94-11ea-8b23-00224803698a) already attached to node(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-node-11577350-vmss/virtualMachines/k8s-node-11577350-vmss_47), could not be attached to node(k8s-node-11577350-vmss00001c)"
I0917 03:48:27.354677 1 azure_controller_common.go:120] found dangling volume /subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-5bdf4361-a17d-11ea-b922-00224803698a attached to node k8s-node-11577350-vmss_62
I0917 03:48:27.355091 1 attacher.go:89] Attach volume "/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-5bdf4361-a17d-11ea-b922-00224803698a" to instance "k8s-node-11577350-vmss00001n" failed with disk(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-5bdf4361-a17d-11ea-b922-00224803698a) already attached to node(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-node-11577350-vmss/virtualMachines/k8s-node-11577350-vmss_62), could not be attached to node(k8s-node-11577350-vmss00001n)
E0917 03:48:27.357435 1 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/azure-disk//subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-5bdf4361-a17d-11ea-b922-00224803698a podName: nodeName:}" failed. No retries permitted until 2020-09-17 03:48:27.857387257 +0000 UTC m=+34.346855231 (durationBeforeRetry 500ms). Error: "AttachVolume.Attach failed for volume "pvc-5bdf4361-a17d-11ea-b922-00224803698a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-5bdf4361-a17d-11ea-b922-00224803698a") from node "k8s-node-11577350-vmss00001n" : disk(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-5bdf4361-a17d-11ea-b922-00224803698a) already attached to node(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-node-11577350-vmss/virtualMachines/k8s-node-11577350-vmss_62), could not be attached to node(k8s-node-11577350-vmss00001n)"
I0917 03:48:27.359909 1 azure_controller_common.go:120] found dangling volume /subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-facbf821-9ad0-4afe-a012-1ca2e2470628 attached to node k8s-node-11577350-vmss_62
I0917 03:48:27.360047 1 attacher.go:89] Attach volume "/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-facbf821-9ad0-4afe-a012-1ca2e2470628" to instance "k8s-node-11577350-vmss00001o" failed with disk(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-facbf821-9ad0-4afe-a012-1ca2e2470628) already attached to node(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-node-11577350-vmss/virtualMachines/k8s-node-11577350-vmss_62), could not be attached to node(k8s-node-11577350-vmss00001o)
E0917 03:48:27.361412 1 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/azure-disk//subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-facbf821-9ad0-4afe-a012-1ca2e2470628 podName: nodeName:}" failed. No retries permitted until 2020-09-17 03:48:27.860236367 +0000 UTC m=+34.349704241 (durationBeforeRetry 500ms). Error: "AttachVolume.Attach failed for volume "pvc-facbf821-9ad0-4afe-a012-1ca2e2470628" (UniqueName: "kubernetes.io/azure-disk//subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-facbf821-9ad0-4afe-a012-1ca2e2470628") from node "k8s-node-11577350-vmss00001o" : disk(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-facbf821-9ad0-4afe-a012-1ca2e2470628) already attached to node(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-node-11577350-vmss/virtualMachines/k8s-node-11577350-vmss_62), could not be attached to node(k8s-node-11577350-vmss00001o)"
I0917 03:48:27.365744 1 azure_controller_common.go:120] found dangling volume /subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-2a257211-abfd-11ea-8b23-00224803698a attached to node k8s-node-11577350-vmss_62


I0917 03:48:27.297812 1 event.go:258] Event(v1.ObjectReference{Kind:"Pod", Namespace:"monitoring", Name:"kube-alertmanager-0", UID:"6466aa64-6656-4154-93a6-3197cd8bd9ac", APIVersion:"v1", ResourceVersion:"106026856", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' AttachVolume.Attach failed for volume "pvc-aab21384-2120-41bd-94fb-1368e062528b" : azure - cloud provider rate limited(read) for operation:GetDisk
I0917 03:48:27.340252 1 azure_controller_common.go:120] found dangling volume /subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-38a21327-af94-11ea-8b23-00224803698a attached to node k8s-node-11577350-vmss_47

Expected behavior
The VMSS nodes should mount the volume

AKS Engine version
aks-engine v0.54.1

Kubernetes version
1.15.11

Additional context

@amankohli amankohli added the bug Something isn't working label Sep 17, 2020
@jsturtevant
Contributor

Can you do a describe on a pod that is failing with the disk attach?

I0917 03:48:27.340351 1 attacher.go:89] Attach volume failed with disk .... already attached to node.... could not be attached to node(k8s-node-11577350-vmss00001c)

Looks similar to some of the output in kubernetes/kubernetes#90749 and kubernetes/kubernetes#81266

@AndyZhang any thoughts? It seems the volume is not being cleaned properly.

@kebeckwith

Thanks @jsturtevant, I work with the original bug poster.

Here is an example of a pod description:

Name: sde-prometheus-server-0
Namespace: default
Priority: 0
Node: k8s-node-11577350-vmss00001o/10.5.0.42
Start Time: Wed, 09 Sep 2020 16:55:52 +0000
Labels: app=sde-prometheus
chart=sde-prometheus-0.11.1
component=sde-prometheus-server
controller-revision-hash=sde-prometheus-server-59588c8cfd
heritage=Tiller
release=sde-prometheus
statefulset.kubernetes.io/pod-name=sde-prometheus-server-0
Annotations:
Status: Pending
IP:
IPs:
Controlled By: StatefulSet/sde-prometheus-server
Init Containers:
init-chown-data:
Container ID:
Image: busybox:latest
Image ID:
Port:
Host Port:
Command:
chown
-R
65534:65534
/data
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
Mounts:
/data from storage-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from sde-prometheus-server-token-dnrlq (ro)
Containers:
sde-prometheus-server-configmap-reload:
Container ID:
Image: registry.qstack.com/qstack/configmap-reload:v0.3.0
Image ID:
Port:
Host Port:
Args:
--volume-dir=/etc/config
--webhook-url=http://127.0.0.1:9090/-/reload
--volume-dir=/etc/config/alerting-rules
--volume-dir=/etc/config/recording-rules
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
Mounts:
/etc/config from config-volume (ro)
/etc/config/alerting-rules from configmap-reload-prometheus-alerting-rules (ro)
/etc/config/recording-rules from configmap-reload-prometheus-recording-rules (ro)
/var/run/secrets/kubernetes.io/serviceaccount from sde-prometheus-server-token-dnrlq (ro)
sde-prometheus-server:
Container ID:
Image: registry.qstack.com/qstack/prometheus:v2.15.1
Image ID:
Port: 9090/TCP
Host Port: 0/TCP
Args:
--storage.tsdb.retention.time=30d
--config.file=/etc/config/prometheus.yml
--storage.tsdb.path=/data
--web.console.libraries=/etc/prometheus/console_libraries
--web.console.templates=/etc/prometheus/consoles
--web.enable-lifecycle
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 500m
memory: 4000Mi
Requests:
cpu: 500m
memory: 4000Mi
Liveness: http-get http://:9090/-/healthy delay=30s timeout=30s period=10s #success=1 #failure=3
Readiness: http-get http://:9090/-/ready delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:
Mounts:
/data from storage-volume (rw)
/etc/config from config-volume (rw)
/etc/config/alerting-rules from sde-prometheus-server-prometheus-alerting-rules (ro)
/etc/config/recording-rules from sde-prometheus-server-prometheus-recording-rules (ro)
/var/run/secrets/kubernetes.io/serviceaccount from sde-prometheus-server-token-dnrlq (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
storage-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: storage-volume-sde-prometheus-server-0
ReadOnly: false
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: sde-prometheus-server
Optional: false
configmap-reload-prometheus-alerting-rules:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-alerting-rules
Optional: false
configmap-reload-prometheus-recording-rules:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-recording-rules
Optional: false
sde-prometheus-server-prometheus-alerting-rules:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-alerting-rules
Optional: false
sde-prometheus-server-prometheus-recording-rules:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-recording-rules
Optional: false
prometheus-alerting-rules:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-alerting-rules
Optional: false
prometheus-recording-rules:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-recording-rules
Optional: false
sde-prometheus-server-token-dnrlq:
Type: Secret (a volume populated by a Secret)
SecretName: sde-prometheus-server-token-dnrlq
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Warning FailedAttachVolume 31m (x10 over 37m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-97b4d2ff-2e72-45cb-a9ee-84dcd0075214" : disk(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-97b4d2ff-2e72-45cb-a9ee-84dcd0075214) already attached to node(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-node-11577350-vmss/virtualMachines/k8s-node-11577350-vmss_62), could not be attached to node(k8s-node-11577350-vmss00001o)
Warning FailedAttachVolume 27m (x1266 over 37m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-97b4d2ff-2e72-45cb-a9ee-84dcd0075214" : azure - cloud provider rate limited(read) for operation:GetDisk
Warning FailedMount 27s (x5110 over 8d) kubelet, k8s-node-11577350-vmss00001o Unable to mount volumes for pod "sde-prometheus-server-0_default(44392419-2261-4e5c-bfe4-7b150d69dc22)": timeout expired waiting for volumes to attach or mount for pod "default"/"sde-prometheus-server-0". list of unmounted volumes=[storage-volume]. list of unattached volumes=[storage-volume config-volume configmap-reload-prometheus-alerting-rules configmap-reload-prometheus-recording-rules sde-prometheus-server-prometheus-alerting-rules sde-prometheus-server-prometheus-recording-rules prometheus-alerting-rules prometheus-recording-rules sde-prometheus-server-token-dnrlq]

This example is from today's logs:

I0917 17:33:28.178263 1 attacher.go:89] Attach volume "/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-97b4d2ff-2e72-45cb-a9ee-84dcd0075214" to instance "k8s-node-11577350-vmss00001o" failed with disk(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-97b4d2ff-2e72-45cb-a9ee-84dcd0075214) already attached to node(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-node-11577350-vmss/virtualMachines/k8s-node-11577350-vmss_62), could not be attached to node(k8s-node-11577350-vmss00001o)
E0917 17:33:28.178485 1 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/azure-disk//subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-97b4d2ff-2e72-45cb-a9ee-84dcd0075214 podName: nodeName:}" failed. No retries permitted until 2020-09-17 17:33:28.678414064 +0000 UTC m=+690.403197189 (durationBeforeRetry 500ms). Error: "AttachVolume.Attach failed for volume \"pvc-97b4d2ff-2e72-45cb-a9ee-84dcd0075214\" (UniqueName: \"kubernetes.io/azure-disk//subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-97b4d2ff-2e72-45cb-a9ee-84dcd0075214\") from node \"k8s-node-11577350-vmss00001o\" : disk(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-97b4d2ff-2e72-45cb-a9ee-84dcd0075214) already attached to node(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-node-11577350-vmss/virtualMachines/k8s-node-11577350-vmss_62), could not be attached to node(k8s-node-11577350-vmss00001o)"
I0917 17:33:28.178539 1 event.go:258] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"sde-prometheus-server-0", UID:"44392419-2261-4e5c-bfe4-7b150d69dc22", APIVersion:"v1", ResourceVersion:"103906288", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' AttachVolume.Attach failed for volume "pvc-97b4d2ff-2e72-45cb-a9ee-84dcd0075214" : disk(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/disks/k8seastus2euapdc-dynamic-pvc-97b4d2ff-2e72-45cb-a9ee-84dcd0075214) already attached to node(/subscriptions/2f495c46-73b1-463c-ae90-dae28e3880ef/resourceGroups/anf.dc.mgmt.eastus2euap.rg/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-node-11577350-vmss/virtualMachines/k8s-node-11577350-vmss_62), could not be attached to node(k8s-node-11577350-vmss00001o)

I agree, this looks a lot like those two bugs you linked.

Thanks!

Kevin

@kebeckwith

Also, I see the fix for this is in 1.15.4; we are just now upgrading to 1.15.12, so we are a little ways from that. Is there anything we can do manually to get the disks cleaned up and unblock us here? Is there any way we can manually detach the "dangling" disks to fix the state of the cluster?

@jackfrancis
Member

@andyzhangx is it safe to perform this manual operation for every dangling disk that is still attached?

https://docs.microsoft.com/en-us/azure/virtual-machines/linux/detach-disk#detach-a-data-disk-using-azure-cli

@jsturtevant
Contributor

It looks like kubernetes/kubernetes#90749 did not make it into 1.15 because that release is out of support: kubernetes/kubernetes#90800

@kebeckwith

@jackfrancis and @jsturtevant Thank you both for your responses. I did end up taking down the kube-controller-manager pods and working through the logs to find all mentions of dangling disks, then manually detaching them from their respective nodes. This mitigated the issue: kube-controller-manager was then able to do its thing and attach the disks where they needed to go, and all pods on the cluster are now running.
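For anyone hitting the same state, the mitigation described above can be sketched as a small script that scans the kube-controller-manager log for "found dangling volume" entries and prints a candidate detach command for each one. This is a sketch, not the exact steps the poster ran: the `<resource-group>`/`<vmss-name>` placeholders and the `az vmss disk detach` flags are assumptions, so check `az vmss disk detach -h` for your CLI version before running anything it prints.

```shell
#!/bin/sh
# Scan a kube-controller-manager log for "found dangling volume" lines and
# PRINT (do not execute) a detach command for each dangling disk.
print_detach_commands() {
  grep 'found dangling volume' "$1" |
    # Capture the disk name and the numeric VMSS instance id from lines like:
    #   ... found dangling volume /subscriptions/.../disks/<disk> attached to node <vmss-name>_<id>
    sed -E 's|.*/disks/([^ ]+) attached to node ([^ ]+)_([0-9]+).*|\1 \3|' |
    while read -r disk instance; do
      # Placeholders must be replaced with the real resource group and VMSS
      # name; flags are assumptions to verify against `az vmss disk detach -h`.
      echo "az vmss disk detach -g <resource-group> --vmss-name <vmss-name> --instance-id $instance --name $disk"
    done
}
```

Reviewing the printed commands by hand before running them keeps this safe even if the log format differs slightly between Kubernetes versions.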

@jackfrancis
Member

@kebeckwith Thank you so much for reporting back and sharing your mitigation steps to help other users! :)

@andyzhangx
Contributor

andyzhangx commented Sep 18, 2020

Here is where the dangling-disk error is fixed on VMSS; manually detaching the disk always works:

| k8s version | fixed version |
| --- | --- |
| v1.14 | only hotfixed with image mcr.microsoft.com/oss/kubernetes/hyperkube:v1.14.8-hotfix.20200529.1 |
| v1.15 | only hotfixed with images mcr.microsoft.com/oss/kubernetes/hyperkube:v1.15.11-hotfix.20200529.1 and mcr.microsoft.com/oss/kubernetes/hyperkube:v1.15.12-hotfix.20200603 |
| v1.16 | 1.16.10 (also hotfixed with image mcr.microsoft.com/oss/kubernetes/hyperkube:v1.16.9-hotfix.20200529.1) |
| v1.17 | 1.17.6 |
| v1.18 | 1.18.3 |
| v1.19 | 1.19.0 |
