diff --git a/v1.23/daocloud/PRODUCT.yaml b/v1.23/daocloud/PRODUCT.yaml
new file mode 100644
index 0000000000..5a6b8cf0bb
--- /dev/null
+++ b/v1.23/daocloud/PRODUCT.yaml
@@ -0,0 +1,8 @@
+vendor: DaoCloud
+name: DaoCloud Enterprise
+version: v4.0.9-35552
+website_url: https://www.daocloud.io/dce
+documentation_url: https://download.daocloud.io/DaoCloud_Enterprise/DaoCloud_Enterprise/4.0.9
+product_logo_url: https://github.com/dasu23/DC-Jenkinsfile/raw/master/DaoCloud.svg
+type: distribution
+description: 'DaoCloud Enterprise provides a reliable and consistent foundational support environment to meet the high SLA requirements of enterprise-critical applications.'
diff --git a/v1.23/daocloud/README.md b/v1.23/daocloud/README.md
new file mode 100644
index 0000000000..77c2cb8207
--- /dev/null
+++ b/v1.23/daocloud/README.md
@@ -0,0 +1,27 @@
+# DaoCloud Enterprise
+
+DaoCloud Enterprise is a Kubernetes-based platform developed by [DaoCloud](https://www.daocloud.io).
+
+## How to Reproduce
+
+First, install DaoCloud Enterprise 4.0.9, which is based on Kubernetes 1.23.3. To install it, run the following commands on a CentOS 7.7 system:
+```
+sudo su
+curl -L https://dce.daocloud.io/DaoCloud_Enterprise/4.0.9/os-requirements > ./os-requirements
+chmod +x ./os-requirements
+./os-requirements
+bash -c "$(docker run -i --rm daocloud.io/daocloud/dce:4.0.9-35552 install)"
+```
+To add more nodes to the cluster, log in to the DaoCloud Enterprise control panel and follow the instructions in the node management section.
+
+After the installation, run ```docker exec -it `docker ps | grep dce-kube-controller | awk '{print$1}'` bash``` to enter the DaoCloud Enterprise Kubernetes controller container.
+
+The standard tool for running these tests is
+[Sonobuoy](https://github.com/heptio/sonobuoy), and the standard way to run
+these in your cluster is with `curl -L https://github.com/raw/cncf/k8s-conformance/master/sonobuoy-conformance.yaml | kubectl apply -f -`.
+
+Watch Sonobuoy's logs with `kubectl logs -f -n sonobuoy sonobuoy` and wait for
+the line `no-exit was specified, sonobuoy is now blocking`. At this point, use
+`kubectl cp` to bring the results to your local machine, expand the tarball, and
+retain the three files `plugins/e2e/results/{e2e.log,junit.xml,version.txt}`, which will
+be included in your submission.
\ No newline at end of file
diff --git a/v1.23/daocloud/e2e.log b/v1.23/daocloud/e2e.log
new file mode 100644
index 0000000000..aca166e187
--- /dev/null
+++ b/v1.23/daocloud/e2e.log
@@ -0,0 +1,15770 @@
+I0803 06:16:19.374652 21 e2e.go:132] Starting e2e run "f3f10412-cf7f-4e50-98c6-dad5df587000" on Ginkgo node 1
+{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0}
+Running Suite: Kubernetes e2e suite
+===================================
+Random Seed: 1659507379 - Will randomize all specs
+Will run 346 of 7042 specs
+
+Aug 3 06:16:22.672: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993
+Aug 3 06:16:22.676: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
+Aug 3 06:16:22.713: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
+Aug 3 06:16:22.811: INFO: 67 / 67 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
+Aug 3 06:16:22.811: INFO: expected 17 pod replicas in namespace 'kube-system', 17 are Running and Ready.
+Aug 3 06:16:22.811: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Aug 3 06:16:22.834: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) +Aug 3 06:16:22.834: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'dce-engine' (0 seconds elapsed) +Aug 3 06:16:22.834: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'dce-parcel-agent' (0 seconds elapsed) +Aug 3 06:16:22.834: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'dce-parcel-server' (0 seconds elapsed) +Aug 3 06:16:22.834: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'dce-uds-host-driver' (0 seconds elapsed) +Aug 3 06:16:22.834: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Aug 3 06:16:22.834: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'node-local-dns' (0 seconds elapsed) +Aug 3 06:16:22.834: INFO: e2e test version: v1.23.3 +Aug 3 06:16:22.837: INFO: kube-apiserver version: v1.23.3 +Aug 3 06:16:22.837: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:16:22.850: INFO: Cluster IP family: ipv4 +SS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:16:22.851: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pod-network-test +W0803 06:16:22.959189 21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +Aug 3 06:16:22.959: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
+STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-270 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Aug 3 06:16:22.962: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Aug 3 06:16:23.039: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:16:25.052: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:16:27.051: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:16:29.050: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:16:31.055: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:16:33.052: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:16:35.051: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:16:37.049: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:16:39.050: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:16:41.057: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:16:43.063: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:16:45.054: INFO: The status of Pod netserver-0 is Running (Ready = true) +Aug 3 06:16:45.072: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Aug 3 06:16:51.120: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Aug 3 06:16:51.120: INFO: Breadth first check of 172.29.31.113 on host 10.6.213.40... +Aug 3 06:16:51.126: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.29.175.6:9080/dial?request=hostname&protocol=udp&host=172.29.31.113&port=8081&tries=1'] Namespace:pod-network-test-270 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:16:51.126: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:16:51.127: INFO: ExecWithOptions: Clientset creation +Aug 3 06:16:51.127: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/pod-network-test-270/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.29.175.6%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.29.31.113%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:16:51.332: INFO: Waiting for responses: map[] +Aug 3 06:16:51.332: INFO: reached 172.29.31.113 after 0/1 tries +Aug 3 06:16:51.332: INFO: Breadth first check of 172.29.175.5 on host 10.6.213.50... 
+Aug 3 06:16:51.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.29.175.6:9080/dial?request=hostname&protocol=udp&host=172.29.175.5&port=8081&tries=1'] Namespace:pod-network-test-270 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:16:51.338: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:16:51.339: INFO: ExecWithOptions: Clientset creation +Aug 3 06:16:51.340: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/pod-network-test-270/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.29.175.6%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.29.175.5%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:16:51.523: INFO: Waiting for responses: map[] +Aug 3 06:16:51.523: INFO: reached 172.29.175.5 after 0/1 tries +Aug 3 06:16:51.523: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:16:51.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-270" for this suite. + +• [SLOW TEST:28.894 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":346,"completed":1,"skipped":2,"failed":0} +SS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:16:51.745: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename prestop +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 +[It] should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating server pod server in namespace prestop-3753 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-3753 +STEP: Deleting pre-stop pod +Aug 3 06:17:06.944: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:17:06.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-3753" for this suite. + +• [SLOW TEST:15.237 seconds] +[sig-node] PreStop +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":346,"completed":2,"skipped":4,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:17:06.982: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:17:07.037: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Aug 3 06:17:07.064: INFO: The status of Pod pod-logs-websocket-9bb568e1-3495-4ccb-9a76-7c105fe1f4e8 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:17:09.079: INFO: The status of Pod pod-logs-websocket-9bb568e1-3495-4ccb-9a76-7c105fe1f4e8 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:17:11.079: INFO: The status of Pod pod-logs-websocket-9bb568e1-3495-4ccb-9a76-7c105fe1f4e8 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:17:13.078: INFO: The status of Pod pod-logs-websocket-9bb568e1-3495-4ccb-9a76-7c105fe1f4e8 is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:17:13.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4865" for this suite. 
+ +• [SLOW TEST:6.184 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":346,"completed":3,"skipped":18,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:17:13.167: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: referencing a single matching pod +STEP: referencing matching pods with named port +STEP: creating empty Endpoints and EndpointSlices for no matching Pods +STEP: recreating EndpointSlices after they've been deleted +Aug 3 06:17:38.483: INFO: EndpointSlice for Service endpointslice-9048/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:17:48.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-9048" for this suite. 
+ +• [SLOW TEST:35.364 seconds] +[sig-network] EndpointSlice +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":346,"completed":4,"skipped":30,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:17:48.531: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a Replicaset +STEP: Verify that the required pods have come up. +Aug 3 06:17:48.635: INFO: Pod name sample-pod: Found 0 pods out of 1 +Aug 3 06:17:53.647: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Getting /status +Aug 3 06:17:53.660: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status +Aug 3 06:17:53.675: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated +Aug 3 06:17:53.679: INFO: Observed &ReplicaSet event: ADDED +Aug 3 06:17:53.679: INFO: Observed &ReplicaSet event: MODIFIED +Aug 3 06:17:53.679: INFO: Observed &ReplicaSet event: MODIFIED +Aug 3 06:17:53.679: INFO: Observed &ReplicaSet event: MODIFIED +Aug 3 06:17:53.679: INFO: Found replicaset test-rs in namespace replicaset-9724 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Aug 3 06:17:53.679: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status +Aug 3 06:17:53.679: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Aug 3 06:17:53.687: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched +Aug 3 06:17:53.690: INFO: Observed &ReplicaSet event: ADDED +Aug 3 06:17:53.690: INFO: Observed &ReplicaSet event: MODIFIED +Aug 3 06:17:53.690: INFO: Observed &ReplicaSet event: MODIFIED +Aug 3 06:17:53.690: INFO: Observed &ReplicaSet event: MODIFIED +Aug 3 06:17:53.690: INFO: Observed replicaset test-rs in namespace replicaset-9724 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Aug 3 06:17:53.690: 
INFO: Observed &ReplicaSet event: MODIFIED +Aug 3 06:17:53.690: INFO: Found replicaset test-rs in namespace replicaset-9724 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +Aug 3 06:17:53.690: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:17:53.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-9724" for this suite. + +• [SLOW TEST:5.181 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":5,"skipped":37,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:17:53.712: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-7576 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-7576 +STEP: creating replication controller externalsvc in namespace services-7576 +I0803 06:17:53.841475 21 runners.go:193] Created replication controller with name: externalsvc, namespace: services-7576, replica count: 2 +I0803 06:17:56.893116 21 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 06:17:59.893421 21 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName +Aug 3 06:17:59.939: INFO: Creating new exec pod +Aug 3 06:18:06.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7576 exec execpod9blkg -- /bin/sh -x -c nslookup nodeport-service.services-7576.svc.cluster.local' +Aug 3 06:18:06.701: INFO: stderr: "+ nslookup nodeport-service.services-7576.svc.cluster.local\n" +Aug 3 06:18:06.701: INFO: stdout: "Server:\t\t172.31.0.10\nAddress:\t172.31.0.10#53\n\nnodeport-service.services-7576.svc.cluster.local\tcanonical 
name = externalsvc.services-7576.svc.cluster.local.\nName:\texternalsvc.services-7576.svc.cluster.local\nAddress: 172.31.242.255\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-7576, will wait for the garbage collector to delete the pods +Aug 3 06:18:06.783: INFO: Deleting ReplicationController externalsvc took: 16.46576ms +Aug 3 06:18:06.884: INFO: Terminating ReplicationController externalsvc pods took: 100.817975ms +Aug 3 06:18:11.026: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:18:11.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7576" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:17.362 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":346,"completed":6,"skipped":65,"failed":0} +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:18:11.074: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Aug 3 06:18:11.143: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:18:16.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-4630" for this suite. 
+ +• [SLOW TEST:5.511 seconds] +[sig-node] InitContainer [NodeConformance] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":346,"completed":7,"skipped":65,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:18:16.596: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 06:18:16.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7187f8d9-d9a9-4c2f-9ce7-f1ebcfdc8da3" in namespace "downward-api-8978" to be "Succeeded or Failed" +Aug 3 06:18:16.771: INFO: Pod "downwardapi-volume-7187f8d9-d9a9-4c2f-9ce7-f1ebcfdc8da3": Phase="Pending", Reason="", readiness=false. Elapsed: 61.605806ms +Aug 3 06:18:18.787: INFO: Pod "downwardapi-volume-7187f8d9-d9a9-4c2f-9ce7-f1ebcfdc8da3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078061588s +Aug 3 06:18:20.795: INFO: Pod "downwardapi-volume-7187f8d9-d9a9-4c2f-9ce7-f1ebcfdc8da3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085764833s +Aug 3 06:18:22.810: INFO: Pod "downwardapi-volume-7187f8d9-d9a9-4c2f-9ce7-f1ebcfdc8da3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.100263892s +STEP: Saw pod success +Aug 3 06:18:22.810: INFO: Pod "downwardapi-volume-7187f8d9-d9a9-4c2f-9ce7-f1ebcfdc8da3" satisfied condition "Succeeded or Failed" +Aug 3 06:18:22.826: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-7187f8d9-d9a9-4c2f-9ce7-f1ebcfdc8da3 container client-container: +STEP: delete the pod +Aug 3 06:18:22.890: INFO: Waiting for pod downwardapi-volume-7187f8d9-d9a9-4c2f-9ce7-f1ebcfdc8da3 to disappear +Aug 3 06:18:22.900: INFO: Pod downwardapi-volume-7187f8d9-d9a9-4c2f-9ce7-f1ebcfdc8da3 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:18:22.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8978" for this suite. + +• [SLOW TEST:6.330 seconds] +[sig-storage] Downward API volume +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":8,"skipped":111,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:18:22.926: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's command +Aug 3 06:18:23.046: INFO: Waiting up to 5m0s for pod "var-expansion-15683c38-0df4-42ea-a6c0-7f19fa0b44cb" in namespace "var-expansion-885" to be "Succeeded or Failed" +Aug 3 06:18:23.060: INFO: Pod "var-expansion-15683c38-0df4-42ea-a6c0-7f19fa0b44cb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.0662ms +Aug 3 06:18:25.072: INFO: Pod "var-expansion-15683c38-0df4-42ea-a6c0-7f19fa0b44cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025719165s +Aug 3 06:18:27.084: INFO: Pod "var-expansion-15683c38-0df4-42ea-a6c0-7f19fa0b44cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038078288s +Aug 3 06:18:29.096: INFO: Pod "var-expansion-15683c38-0df4-42ea-a6c0-7f19fa0b44cb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.049814744s +STEP: Saw pod success +Aug 3 06:18:29.096: INFO: Pod "var-expansion-15683c38-0df4-42ea-a6c0-7f19fa0b44cb" satisfied condition "Succeeded or Failed" +Aug 3 06:18:29.103: INFO: Trying to get logs from node dce-10-6-213-50 pod var-expansion-15683c38-0df4-42ea-a6c0-7f19fa0b44cb container dapi-container: +STEP: delete the pod +Aug 3 06:18:29.181: INFO: Waiting for pod var-expansion-15683c38-0df4-42ea-a6c0-7f19fa0b44cb to disappear +Aug 3 06:18:29.186: INFO: Pod var-expansion-15683c38-0df4-42ea-a6c0-7f19fa0b44cb no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:18:29.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-885" for this suite. + +• [SLOW TEST:6.283 seconds] +[sig-node] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":346,"completed":9,"skipped":118,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:18:29.210: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:18:29.535: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:18:30.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-9196" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":346,"completed":10,"skipped":145,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:18:30.639: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Aug 3 06:18:30.826: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5329 ccf4f37f-ae1f-43c0-a922-3c02a87d4ee5 596162 0 2022-08-03 06:18:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 06:18:30.827: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5329 ccf4f37f-ae1f-43c0-a922-3c02a87d4ee5 596163 0 2022-08-03 06:18:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 06:18:30.830: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5329 ccf4f37f-ae1f-43c0-a922-3c02a87d4ee5 596164 0 2022-08-03 06:18:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Aug 3 06:18:40.905: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5329 ccf4f37f-ae1f-43c0-a922-3c02a87d4ee5 596216 0 2022-08-03 06:18:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 06:18:40.905: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5329 ccf4f37f-ae1f-43c0-a922-3c02a87d4ee5 596217 0 2022-08-03 06:18:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 06:18:40.906: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed 
watch-5329 ccf4f37f-ae1f-43c0-a922-3c02a87d4ee5 596218 0 2022-08-03 06:18:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:18:40.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-5329" for this suite. + +• [SLOW TEST:10.284 seconds] +[sig-api-machinery] Watchers +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":346,"completed":11,"skipped":157,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:18:40.924: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Aug 3 06:18:41.003: INFO: Waiting up to 5m0s for pod "downward-api-757e0f5b-145d-44b0-ab9a-a6a5dfb087a5" in namespace "downward-api-4770" to be "Succeeded or Failed" +Aug 3 06:18:41.010: INFO: Pod "downward-api-757e0f5b-145d-44b0-ab9a-a6a5dfb087a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577024ms +Aug 3 06:18:43.069: INFO: Pod "downward-api-757e0f5b-145d-44b0-ab9a-a6a5dfb087a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065435826s +Aug 3 06:18:45.078: INFO: Pod "downward-api-757e0f5b-145d-44b0-ab9a-a6a5dfb087a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075022018s +Aug 3 06:18:47.087: INFO: Pod "downward-api-757e0f5b-145d-44b0-ab9a-a6a5dfb087a5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.084122843s +STEP: Saw pod success +Aug 3 06:18:47.087: INFO: Pod "downward-api-757e0f5b-145d-44b0-ab9a-a6a5dfb087a5" satisfied condition "Succeeded or Failed" +Aug 3 06:18:47.094: INFO: Trying to get logs from node dce-10-6-213-50 pod downward-api-757e0f5b-145d-44b0-ab9a-a6a5dfb087a5 container dapi-container: +STEP: delete the pod +Aug 3 06:18:47.125: INFO: Waiting for pod downward-api-757e0f5b-145d-44b0-ab9a-a6a5dfb087a5 to disappear +Aug 3 06:18:47.131: INFO: Pod downward-api-757e0f5b-145d-44b0-ab9a-a6a5dfb087a5 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:18:47.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4770" for this suite. + +• [SLOW TEST:6.230 seconds] +[sig-node] Downward API +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":346,"completed":12,"skipped":188,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:18:47.155: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Aug 3 06:18:47.238: INFO: The status of Pod pod-update-activedeadlineseconds-5cccfa79-5055-4375-8650-b7867b1e4e30 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:18:49.258: INFO: The status of Pod pod-update-activedeadlineseconds-5cccfa79-5055-4375-8650-b7867b1e4e30 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:18:51.250: INFO: The status of Pod pod-update-activedeadlineseconds-5cccfa79-5055-4375-8650-b7867b1e4e30 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:18:53.254: INFO: The status of Pod pod-update-activedeadlineseconds-5cccfa79-5055-4375-8650-b7867b1e4e30 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Aug 3 06:18:53.789: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5cccfa79-5055-4375-8650-b7867b1e4e30" +Aug 3 06:18:53.789: INFO: Waiting up to 5m0s for pod 
"pod-update-activedeadlineseconds-5cccfa79-5055-4375-8650-b7867b1e4e30" in namespace "pods-4861" to be "terminated due to deadline exceeded" +Aug 3 06:18:53.801: INFO: Pod "pod-update-activedeadlineseconds-5cccfa79-5055-4375-8650-b7867b1e4e30": Phase="Running", Reason="", readiness=true. Elapsed: 12.10732ms +Aug 3 06:18:55.817: INFO: Pod "pod-update-activedeadlineseconds-5cccfa79-5055-4375-8650-b7867b1e4e30": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 2.02771336s +Aug 3 06:18:55.817: INFO: Pod "pod-update-activedeadlineseconds-5cccfa79-5055-4375-8650-b7867b1e4e30" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:18:55.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4861" for this suite. + +• [SLOW TEST:8.681 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":13,"skipped":250,"failed":0} +SSSSS +------------------------------ +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:18:55.836: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename endpointslicemirroring +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: mirroring a new custom Endpoint +Aug 3 06:18:55.963: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 +STEP: mirroring an update to a custom Endpoint +Aug 3 06:18:57.989: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 +STEP: mirroring deletion of a custom Endpoint +Aug 3 06:19:00.028: INFO: Waiting for 0 EndpointSlices to exist, got 1 +[AfterEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:19:02.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslicemirroring-3782" for this suite. 
+ +• [SLOW TEST:6.230 seconds] +[sig-network] EndpointSliceMirroring +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":346,"completed":14,"skipped":255,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:19:02.067: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a suspended cronjob +STEP: Ensuring no jobs are scheduled +STEP: Ensuring no job exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:24:02.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-4800" for this suite. 
+ +• [SLOW TEST:300.137 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":15,"skipped":280,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:24:02.205: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 06:24:02.282: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7efcaca2-f05c-4c5f-9710-48b905c7f0d3" in namespace "projected-4956" to be "Succeeded or Failed" +Aug 3 06:24:02.292: INFO: Pod "downwardapi-volume-7efcaca2-f05c-4c5f-9710-48b905c7f0d3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.797058ms +Aug 3 06:24:04.311: INFO: Pod "downwardapi-volume-7efcaca2-f05c-4c5f-9710-48b905c7f0d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028147937s +Aug 3 06:24:06.321: INFO: Pod "downwardapi-volume-7efcaca2-f05c-4c5f-9710-48b905c7f0d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038106464s +STEP: Saw pod success +Aug 3 06:24:06.321: INFO: Pod "downwardapi-volume-7efcaca2-f05c-4c5f-9710-48b905c7f0d3" satisfied condition "Succeeded or Failed" +Aug 3 06:24:06.325: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-7efcaca2-f05c-4c5f-9710-48b905c7f0d3 container client-container: +STEP: delete the pod +Aug 3 06:24:06.381: INFO: Waiting for pod downwardapi-volume-7efcaca2-f05c-4c5f-9710-48b905c7f0d3 to disappear +Aug 3 06:24:06.386: INFO: Pod downwardapi-volume-7efcaca2-f05c-4c5f-9710-48b905c7f0d3 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:24:06.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4956" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":16,"skipped":305,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:24:06.409: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-downwardapi-9p7q +STEP: Creating a pod to test atomic-volume-subpath +Aug 3 06:24:06.514: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9p7q" in namespace "subpath-1261" to be "Succeeded or Failed" +Aug 3 06:24:06.527: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Pending", Reason="", readiness=false. Elapsed: 12.693412ms +Aug 3 06:24:08.542: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02772507s +Aug 3 06:24:10.560: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045807162s +Aug 3 06:24:12.573: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Running", Reason="", readiness=true. Elapsed: 6.058247255s +Aug 3 06:24:14.583: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Running", Reason="", readiness=true. Elapsed: 8.068219878s +Aug 3 06:24:16.593: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Running", Reason="", readiness=true. Elapsed: 10.078422608s +Aug 3 06:24:18.599: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Running", Reason="", readiness=true. Elapsed: 12.084799621s +Aug 3 06:24:20.610: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Running", Reason="", readiness=true. Elapsed: 14.094971949s +Aug 3 06:24:22.625: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Running", Reason="", readiness=true. Elapsed: 16.110645436s +Aug 3 06:24:24.637: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Running", Reason="", readiness=true. Elapsed: 18.122161309s +Aug 3 06:24:26.649: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Running", Reason="", readiness=true. Elapsed: 20.134104143s +Aug 3 06:24:28.657: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Running", Reason="", readiness=true. Elapsed: 22.142249033s +Aug 3 06:24:30.670: INFO: Pod "pod-subpath-test-downwardapi-9p7q": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.155710132s +STEP: Saw pod success +Aug 3 06:24:30.670: INFO: Pod "pod-subpath-test-downwardapi-9p7q" satisfied condition "Succeeded or Failed" +Aug 3 06:24:30.676: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-subpath-test-downwardapi-9p7q container test-container-subpath-downwardapi-9p7q: +STEP: delete the pod +Aug 3 06:24:30.713: INFO: Waiting for pod pod-subpath-test-downwardapi-9p7q to disappear +Aug 3 06:24:30.719: INFO: Pod pod-subpath-test-downwardapi-9p7q no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-9p7q +Aug 3 06:24:30.720: INFO: Deleting pod "pod-subpath-test-downwardapi-9p7q" in namespace "subpath-1261" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:24:30.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-1261" for this suite. + +• [SLOW TEST:24.354 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":346,"completed":17,"skipped":312,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:24:30.763: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:24:30.847: INFO: Endpoints addresses: [10.6.213.10 10.6.213.20 10.6.213.30] , ports: [16443] +Aug 3 06:24:30.847: INFO: EndpointSlices addresses: [10.6.213.10 10.6.213.20 10.6.213.30] , ports: [16443] +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:24:30.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-5204" for this suite. 
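+
+The default `kubernetes` Endpoints object that this test compares against its EndpointSlices has roughly the shape below; the addresses and port are the ones reported in the log above, while the port name is assumed to be the usual `https`:
+```yaml
+apiVersion: v1
+kind: Endpoints
+metadata:
+  name: kubernetes
+  namespace: default
+subsets:
+- addresses:
+  - ip: 10.6.213.10
+  - ip: 10.6.213.20
+  - ip: 10.6.213.30
+  ports:
+  - name: https      # assumed port name
+    port: 16443
+    protocol: TCP
+```
+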
+•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":346,"completed":18,"skipped":341,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:24:30.870: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:24:30.964: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Aug 3 06:24:30.984: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:30.984: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Change node label to blue, check that daemon pod is launched. +Aug 3 06:24:31.042: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:31.042: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 06:24:32.054: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:32.054: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 06:24:33.052: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:33.052: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 06:24:34.059: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:34.060: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 06:24:35.052: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:35.052: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 06:24:36.052: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 06:24:36.052: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +STEP: Update the node label to green, and wait for daemons to be unscheduled +Aug 3 06:24:36.087: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 06:24:36.087: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set +Aug 3 06:24:37.098: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:37.098: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Aug 3 06:24:37.127: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:37.127: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 06:24:38.146: INFO: Number of nodes with available pods controlled by 
daemonset daemon-set: 0 +Aug 3 06:24:38.146: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 06:24:39.138: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:39.138: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 06:24:40.137: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:40.137: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 06:24:41.139: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:41.139: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 06:24:42.143: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:42.143: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 06:24:43.136: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 06:24:43.136: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-237, will wait for the garbage collector to delete the pods +Aug 3 06:24:43.218: INFO: Deleting DaemonSet.extensions daemon-set took: 14.737222ms +Aug 3 06:24:43.318: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.24467ms +Aug 3 06:24:47.930: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:24:47.930: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 3 06:24:47.935: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"597672"},"items":null} + +Aug 3 06:24:47.939: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"597672"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:24:47.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-237" for this suite. 
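+
+A minimal sketch of the node-selector DaemonSet this test drives; the label key/value and image are illustrative. The test relabels the target node (blue, then green) and switches the update strategy to RollingUpdate partway through, as the STEP lines above show:
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: daemon-set
+spec:
+  selector:
+    matchLabels:
+      app: daemon-set
+  updateStrategy:
+    type: RollingUpdate   # changed mid-test by the suite
+  template:
+    metadata:
+      labels:
+        app: daemon-set
+    spec:
+      nodeSelector:
+        color: green      # initially blue; flipped during the test
+      containers:
+      - name: app
+        image: registry.k8s.io/pause:3.6
+```
+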
+ +• [SLOW TEST:17.185 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":346,"completed":19,"skipped":351,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:24:48.056: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-0e82c9a6-826e-455c-abe4-1ec4aeebe3c9 +STEP: Creating a pod to test consume secrets +Aug 3 06:24:48.135: INFO: Waiting up to 5m0s for pod "pod-secrets-ebcccfdc-5813-4533-9752-d9d035c37a9f" in namespace "secrets-2386" to be "Succeeded or Failed" +Aug 3 06:24:48.142: INFO: Pod "pod-secrets-ebcccfdc-5813-4533-9752-d9d035c37a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.898953ms +Aug 3 06:24:50.151: INFO: Pod "pod-secrets-ebcccfdc-5813-4533-9752-d9d035c37a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015983944s +Aug 3 06:24:52.162: INFO: Pod "pod-secrets-ebcccfdc-5813-4533-9752-d9d035c37a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027534582s +Aug 3 06:24:54.178: INFO: Pod "pod-secrets-ebcccfdc-5813-4533-9752-d9d035c37a9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043020167s +STEP: Saw pod success +Aug 3 06:24:54.178: INFO: Pod "pod-secrets-ebcccfdc-5813-4533-9752-d9d035c37a9f" satisfied condition "Succeeded or Failed" +Aug 3 06:24:54.185: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-secrets-ebcccfdc-5813-4533-9752-d9d035c37a9f container secret-env-test: +STEP: delete the pod +Aug 3 06:24:54.228: INFO: Waiting for pod pod-secrets-ebcccfdc-5813-4533-9752-d9d035c37a9f to disappear +Aug 3 06:24:54.242: INFO: Pod pod-secrets-ebcccfdc-5813-4533-9752-d9d035c37a9f no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:24:54.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2386" for this suite. 
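+
+The secret-to-environment-variable wiring this test validates looks roughly like this; the secret key, value, and pod name are illustrative, while the container name `secret-env-test` comes from the log:
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: secret-test        # illustrative
+type: Opaque
+stringData:
+  data-1: value-1
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-secrets-demo   # illustrative
+spec:
+  restartPolicy: Never
+  containers:
+  - name: secret-env-test
+    image: busybox
+    command: ["sh", "-c", "env | grep SECRET_DATA"]
+    env:
+    - name: SECRET_DATA
+      valueFrom:
+        secretKeyRef:
+          name: secret-test
+          key: data-1
+```
+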
+ +• [SLOW TEST:6.213 seconds] +[sig-node] Secrets +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":346,"completed":20,"skipped":397,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:24:54.269: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation +Aug 3 06:24:54.346: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation +Aug 3 06:25:12.903: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:25:16.541: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:25:30.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-6590" for this suite. 
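+
+A multi-version CRD of the kind this test publishes into OpenAPI can be sketched as follows; the group, names, and schema are hypothetical placeholders, not the ones the suite generated:
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: e2e-test-foos.example.com   # hypothetical group/name
+spec:
+  group: example.com
+  scope: Namespaced
+  names:
+    plural: e2e-test-foos
+    singular: e2e-test-foo
+    kind: E2eTestFoo
+  versions:
+  - name: v1
+    served: true
+    storage: true
+    schema:
+      openAPIV3Schema:
+        type: object
+  - name: v2                        # second served version of the same group
+    served: true
+    storage: false
+    schema:
+      openAPIV3Schema:
+        type: object
+```
+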
+ +• [SLOW TEST:36.744 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":346,"completed":21,"skipped":424,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:25:31.014: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Pod with a static label +STEP: watching for Pod to be ready +Aug 3 06:25:31.105: INFO: observed Pod pod-test in namespace pods-2828 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Aug 3 06:25:31.114: INFO: observed Pod pod-test in namespace pods-2828 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC }] +Aug 3 06:25:31.159: INFO: observed Pod pod-test in namespace pods-2828 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC }] +Aug 3 06:25:33.007: INFO: observed Pod pod-test in namespace pods-2828 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC }] +Aug 3 06:25:33.346: INFO: observed Pod pod-test in namespace pods-2828 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 
+0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC }] +Aug 3 06:25:35.237: INFO: Found Pod pod-test in namespace pods-2828 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 06:25:31 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data +Aug 3 06:25:35.259: INFO: observed event type ADDED +STEP: getting the Pod and ensuring that it's patched +STEP: replacing the Pod's status Ready condition to False +STEP: check the Pod again to ensure its Ready conditions are False +STEP: deleting the Pod via a Collection with a LabelSelector +STEP: watching for the Pod to be deleted +Aug 3 06:25:35.301: INFO: observed event type ADDED +Aug 3 06:25:35.301: INFO: observed event type MODIFIED +Aug 3 06:25:35.301: INFO: observed event type MODIFIED +Aug 3 06:25:35.301: INFO: observed event type MODIFIED +Aug 3 06:25:35.302: INFO: observed event type MODIFIED +Aug 3 06:25:35.302: INFO: observed event type MODIFIED +Aug 3 06:25:35.302: INFO: observed event type MODIFIED +Aug 3 06:25:35.302: INFO: observed event type MODIFIED +Aug 3 06:25:35.302: INFO: observed event type MODIFIED +Aug 3 06:25:37.271: INFO: observed event type MODIFIED +Aug 3 06:25:39.956: INFO: observed event type MODIFIED +Aug 3 06:25:39.988: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:25:40.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2828" for this suite. 
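+
+The "replacing the Pod's status Ready condition to False" step above is a write against the pod's status subresource; a rough sketch of the condition it sets, with an assumed reason and message:
+```yaml
+status:
+  conditions:
+  - type: Ready
+    status: "False"
+    reason: E2ELifecycleTest                        # assumed reason
+    message: set to False by the conformance suite  # assumed message
+```
+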
+ +• [SLOW TEST:9.019 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":22,"skipped":455,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:25:40.034: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 06:25:40.155: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cae67810-05a3-4f15-85ea-b7d771f15e92" in namespace "projected-9210" to be "Succeeded or Failed" +Aug 3 06:25:40.180: INFO: Pod "downwardapi-volume-cae67810-05a3-4f15-85ea-b7d771f15e92": Phase="Pending", Reason="", readiness=false. Elapsed: 25.106435ms +Aug 3 06:25:42.195: INFO: Pod "downwardapi-volume-cae67810-05a3-4f15-85ea-b7d771f15e92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040192709s +Aug 3 06:25:44.207: INFO: Pod "downwardapi-volume-cae67810-05a3-4f15-85ea-b7d771f15e92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052066991s +Aug 3 06:25:46.217: INFO: Pod "downwardapi-volume-cae67810-05a3-4f15-85ea-b7d771f15e92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061813544s +STEP: Saw pod success +Aug 3 06:25:46.217: INFO: Pod "downwardapi-volume-cae67810-05a3-4f15-85ea-b7d771f15e92" satisfied condition "Succeeded or Failed" +Aug 3 06:25:46.225: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-cae67810-05a3-4f15-85ea-b7d771f15e92 container client-container: +STEP: delete the pod +Aug 3 06:25:46.261: INFO: Waiting for pod downwardapi-volume-cae67810-05a3-4f15-85ea-b7d771f15e92 to disappear +Aug 3 06:25:46.269: INFO: Pod downwardapi-volume-cae67810-05a3-4f15-85ea-b7d771f15e92 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:25:46.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9210" for this suite. 
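+
+This is the limits counterpart of the memory-request test sketched earlier in this log; only the downward API item changes:
+```yaml
+- path: memory_limit
+  resourceFieldRef:
+    containerName: client-container
+    resource: limits.memory
+    divisor: 1Mi
+```
+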
+ +• [SLOW TEST:6.276 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":23,"skipped":465,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should delete a collection of services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:25:46.310: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should delete a collection of services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a collection of services +Aug 3 06:25:46.408: INFO: Creating e2e-svc-a-2tfnb +Aug 3 06:25:46.443: INFO: Creating e2e-svc-b-bp8p6 +Aug 3 06:25:46.479: INFO: Creating e2e-svc-c-tbbmb +STEP: deleting service collection +Aug 3 06:25:46.637: INFO: Collection of services has been deleted +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:25:46.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9101" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":346,"completed":24,"skipped":478,"failed":0} +SSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:25:46.684: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a service. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Service +STEP: Creating a NodePort Service +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota +STEP: Ensuring resource quota status captures service creation +STEP: Deleting Services +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:25:58.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4925" for this suite. + +• [SLOW TEST:11.431 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":346,"completed":25,"skipped":481,"failed":0} +S +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:25:58.115: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service endpoint-test2 in namespace services-2175 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2175 to expose endpoints map[] +Aug 3 06:25:58.242: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found +Aug 3 06:25:59.261: INFO: successfully validated that service endpoint-test2 in namespace services-2175 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-2175 +Aug 3 06:25:59.290: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:26:01.306: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:26:03.307: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2175 to expose endpoints map[pod1:[80]] +Aug 3 06:26:03.332: INFO: successfully validated that service endpoint-test2 in namespace services-2175 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 +Aug 3 06:26:03.332: INFO: Creating new exec pod +Aug 3 06:26:08.401: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-2175 exec execpodcg8kz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Aug 3 06:26:08.712: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Aug 3 06:26:08.712: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 06:26:08.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-2175 exec execpodcg8kz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.106.205 80' +Aug 3 06:26:08.984: INFO: stderr: "+ nc -v -t -w 2 172.31.106.205 80\n+ echo hostName\nConnection to 172.31.106.205 80 port [tcp/http] succeeded!\n" +Aug 3 06:26:08.984: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Creating pod pod2 in namespace services-2175 +Aug 3 06:26:09.007: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:26:11.017: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:26:14.136: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:26:15.291: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:26:17.019: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2175 to expose endpoints map[pod1:[80] pod2:[80]] +Aug 3 06:26:17.069: INFO: successfully validated that service endpoint-test2 in namespace services-2175 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 +Aug 3 06:26:18.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-2175 exec execpodcg8kz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Aug 3 06:26:18.370: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Aug 3 06:26:18.370: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 06:26:18.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-2175 exec execpodcg8kz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.106.205 80' +Aug 3 06:26:18.630: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.106.205 80\nConnection to 172.31.106.205 80 port [tcp/http] succeeded!\n" +Aug 3 06:26:18.630: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-2175 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2175 to expose endpoints map[pod2:[80]] +Aug 3 06:26:18.693: INFO: successfully validated that service endpoint-test2 in namespace services-2175 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic to pod2 +Aug 3 06:26:19.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-2175 exec execpodcg8kz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Aug 3 06:26:19.987: INFO: rc: 1 +Aug 3 06:26:19.987: INFO: Service 
reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-2175 exec execpodcg8kz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80: +Command stdout: + +stderr: ++ echo hostName ++ nc -v -t -w 2 endpoint-test2 80 +nc: connect to endpoint-test2 port 80 (tcp) failed: Connection refused +command terminated with exit code 1 + +error: +exit status 1 +Retrying... +Aug 3 06:26:20.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-2175 exec execpodcg8kz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Aug 3 06:26:24.347: INFO: rc: 1 +Aug 3 06:26:24.347: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-2175 exec execpodcg8kz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80: +Command stdout: + +stderr: ++ echo hostName ++ nc -v -t -w 2 endpoint-test2 80 +nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress +command terminated with exit code 1 + +error: +exit status 1 +Retrying... +Aug 3 06:26:24.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-2175 exec execpodcg8kz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Aug 3 06:26:25.258: INFO: stderr: "+ nc -v -t -w 2 endpoint-test2 80\n+ echo hostName\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Aug 3 06:26:25.258: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 06:26:25.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-2175 exec execpodcg8kz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.106.205 80' +Aug 3 06:26:25.522: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.106.205 80\nConnection to 172.31.106.205 80 port [tcp/http] succeeded!\n" +Aug 3 06:26:25.522: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod2 in namespace services-2175 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2175 to expose endpoints map[] +Aug 3 06:26:25.586: INFO: successfully validated that service endpoint-test2 in namespace services-2175 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:26:25.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2175" for this suite. 
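+
+The service under test (`endpoint-test2` on port 80, per the log) selects the short-lived pods by label; a minimal equivalent, with an assumed selector label:
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: endpoint-test2
+spec:
+  selector:
+    app: endpoint-test2   # assumed label carried by pod1 and pod2
+  ports:
+  - port: 80
+    targetPort: 80
+    protocol: TCP
+```
+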
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:27.578 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":346,"completed":26,"skipped":482,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:26:25.693: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:26:29.809: INFO: Deleting pod "var-expansion-f4808dbc-ec7f-4b00-8add-0c683fc0452d" in namespace "var-expansion-6097" +Aug 3 06:26:29.849: INFO: Wait up to 5m0s for pod "var-expansion-f4808dbc-ec7f-4b00-8add-0c683fc0452d" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:26:37.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6097" for this suite. 
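+
+A minimal sketch of the failure mode this test asserts, assuming the upstream pattern of a `subPathExpr` whose expanded value contains backticks and is therefore rejected by the kubelet; all names and values here are illustrative:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: var-expansion-demo      # illustrative
+spec:
+  restartPolicy: Never
+  containers:
+  - name: main
+    image: busybox
+    command: ["sh", "-c", "sleep 30"]
+    env:
+    - name: POD_NAME
+      value: "..`.."            # expanded value contains backticks
+    volumeMounts:
+    - name: work
+      mountPath: /data
+      subPathExpr: $(POD_NAME)  # expansion yields backticks, so the pod fails
+  volumes:
+  - name: work
+    emptyDir: {}
+```
+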
+ +• [SLOW TEST:12.202 seconds] +[sig-node] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":346,"completed":27,"skipped":495,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:26:37.896: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +Aug 3 06:26:38.524: INFO: created pod pod-service-account-defaultsa +Aug 3 06:26:38.524: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Aug 3 06:26:38.538: INFO: created pod pod-service-account-mountsa +Aug 3 06:26:38.538: INFO: pod pod-service-account-mountsa service account token volume mount: true +Aug 3 06:26:38.551: INFO: created pod pod-service-account-nomountsa +Aug 3 06:26:38.552: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Aug 3 06:26:38.572: INFO: created pod pod-service-account-defaultsa-mountspec +Aug 3 06:26:38.572: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Aug 3 06:26:38.586: INFO: created pod pod-service-account-mountsa-mountspec +Aug 3 06:26:38.586: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Aug 3 06:26:38.596: INFO: created pod pod-service-account-nomountsa-mountspec +Aug 3 06:26:38.596: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Aug 3 06:26:38.620: INFO: created pod pod-service-account-defaultsa-nomountspec +Aug 3 06:26:38.620: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Aug 3 06:26:38.649: INFO: created pod pod-service-account-mountsa-nomountspec +Aug 3 06:26:38.649: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Aug 3 06:26:38.663: INFO: created pod pod-service-account-nomountsa-nomountspec +Aug 3 06:26:38.663: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:26:38.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-2791" for this suite. 
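+
+Opting out of token automount, as this test does for the `nomountsa` pods, hinges on `automountServiceAccountToken: false`; a sketch using the pod name from the log and an assumed ServiceAccount name:
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: nomount-sa                     # assumed name
+automountServiceAccountToken: false
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-service-account-nomountsa  # name taken from the log
+spec:
+  serviceAccountName: nomount-sa
+  # a pod-level spec.automountServiceAccountToken would override the
+  # ServiceAccount setting, which is what the *-mountspec variants probe
+  containers:
+  - name: main
+    image: busybox
+    command: ["sleep", "300"]
+```
+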
+•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":346,"completed":28,"skipped":597,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:26:38.701: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Discovering how many secrets are in namespace by default +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Secret +STEP: Ensuring resource quota status captures secret creation +STEP: Deleting a secret +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:26:55.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7290" for this suite. + +• [SLOW TEST:17.245 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":346,"completed":29,"skipped":626,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:26:55.948: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-9879 +Aug 3 06:26:56.034: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:26:58.045: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Aug 3 06:26:58.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-9879 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Aug 3 06:26:58.412: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Aug 3 06:26:58.412: INFO: stdout: "iptables" +Aug 3 06:26:58.412: INFO: proxyMode: iptables +Aug 3 06:26:58.435: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Aug 3 06:26:58.444: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-clusterip-timeout in namespace services-9879 +STEP: creating replication controller affinity-clusterip-timeout in namespace services-9879 +I0803 06:26:58.474599 21 runners.go:193] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9879, replica count: 3 +I0803 06:27:01.525111 21 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 06:27:04.526552 21 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 06:27:04.552: INFO: Creating new exec pod +Aug 3 06:27:09.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-9879 exec execpod-affinity9mmkv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' +Aug 3 06:27:09.900: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" +Aug 3 06:27:09.900: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 06:27:09.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-9879 exec execpod-affinity9mmkv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.187.21 80' +Aug 3 06:27:10.193: INFO: stderr: "+ echo 
hostName\n+ nc -v -t -w 2 172.31.187.21 80\nConnection to 172.31.187.21 80 port [tcp/http] succeeded!\n" +Aug 3 06:27:10.193: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 06:27:10.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-9879 exec execpod-affinity9mmkv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.31.187.21:80/ ; done' +Aug 3 06:27:10.627: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n" +Aug 3 06:27:10.627: INFO: stdout: "\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk\naffinity-clusterip-timeout-585gk" +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: 
affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Received response from host: affinity-clusterip-timeout-585gk +Aug 3 06:27:10.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-9879 exec execpod-affinity9mmkv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.31.187.21:80/' +Aug 3 06:27:10.887: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n" +Aug 3 06:27:10.887: INFO: stdout: "affinity-clusterip-timeout-585gk" +Aug 3 06:27:30.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-9879 exec execpod-affinity9mmkv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.31.187.21:80/' +Aug 3 06:27:31.180: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.31.187.21:80/\n" +Aug 3 06:27:31.180: INFO: stdout: "affinity-clusterip-timeout-f4mgp" +Aug 3 06:27:31.180: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9879, will wait for the garbage collector to delete the pods +Aug 3 06:27:31.270: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 12.274109ms +Aug 3 06:27:31.370: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.160342ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:27:35.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9879" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:39.683 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":30,"skipped":644,"failed":0} +SSSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:27:35.631: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:27:35.720: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: 
Creating rc "condition-test" that asks for more than the allowed pod quota +STEP: Checking rc "condition-test" has the desired failure condition set +STEP: Scaling down rc "condition-test" to satisfy pod quota +Aug 3 06:27:36.795: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:27:37.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-339" for this suite. +•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":346,"completed":31,"skipped":648,"failed":0} + +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:27:37.857: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-secret-b8nl +STEP: Creating a pod to test atomic-volume-subpath +Aug 3 06:27:37.990: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-b8nl" in namespace "subpath-4529" to be "Succeeded or Failed" +Aug 3 06:27:37.996: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031927ms +Aug 3 06:27:40.013: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023205055s +Aug 3 06:27:42.027: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037549173s +Aug 3 06:27:44.035: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Running", Reason="", readiness=true. Elapsed: 6.045700862s +Aug 3 06:27:46.052: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Running", Reason="", readiness=true. Elapsed: 8.062270116s +Aug 3 06:27:48.069: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Running", Reason="", readiness=true. Elapsed: 10.079307571s +Aug 3 06:27:50.084: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Running", Reason="", readiness=true. Elapsed: 12.094601761s +Aug 3 06:27:52.099: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Running", Reason="", readiness=true. Elapsed: 14.108875126s +Aug 3 06:27:54.112: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Running", Reason="", readiness=true. Elapsed: 16.12174617s +Aug 3 06:27:56.125: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Running", Reason="", readiness=true. Elapsed: 18.13569285s +Aug 3 06:27:58.138: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.147904766s +Aug 3 06:28:00.147: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Running", Reason="", readiness=true. Elapsed: 22.157538923s +Aug 3 06:28:02.159: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Running", Reason="", readiness=true. Elapsed: 24.169249536s +Aug 3 06:28:04.177: INFO: Pod "pod-subpath-test-secret-b8nl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.187396311s +STEP: Saw pod success +Aug 3 06:28:04.177: INFO: Pod "pod-subpath-test-secret-b8nl" satisfied condition "Succeeded or Failed" +Aug 3 06:28:04.184: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-subpath-test-secret-b8nl container test-container-subpath-secret-b8nl: +STEP: delete the pod +Aug 3 06:28:04.247: INFO: Waiting for pod pod-subpath-test-secret-b8nl to disappear +Aug 3 06:28:04.257: INFO: Pod pod-subpath-test-secret-b8nl no longer exists +STEP: Deleting pod pod-subpath-test-secret-b8nl +Aug 3 06:28:04.257: INFO: Deleting pod "pod-subpath-test-secret-b8nl" in namespace "subpath-4529" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:28:04.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4529" for this suite. + +• [SLOW TEST:26.431 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]","total":346,"completed":32,"skipped":648,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:28:04.289: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ReplicationController +STEP: waiting for RC to be added +STEP: waiting for available Replicas +STEP: patching ReplicationController +STEP: waiting for RC to be modified +STEP: patching ReplicationController status +STEP: waiting for RC to be modified +STEP: waiting for available Replicas +STEP: fetching ReplicationController status +STEP: patching ReplicationController scale +STEP: waiting for RC to be modified +STEP: waiting for ReplicationController's scale to be the max 
amount +STEP: fetching ReplicationController; ensuring that it's patched +STEP: updating ReplicationController status +STEP: waiting for RC to be modified +STEP: listing all ReplicationControllers +STEP: checking that ReplicationController has expected values +STEP: deleting ReplicationControllers by collection +STEP: waiting for ReplicationController to have a DELETED watchEvent +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:28:13.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-8723" for this suite. + +• [SLOW TEST:8.857 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":346,"completed":33,"skipped":655,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:28:13.146: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Aug 3 06:28:13.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6965 6006ff1b-922e-4acd-91c7-77b99e5716ab 599267 0 2022-08-03 06:28:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 06:28:13.229: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6965 6006ff1b-922e-4acd-91c7-77b99e5716ab 599268 0 2022-08-03 06:28:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed +Aug 3 06:28:13.248: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6965 6006ff1b-922e-4acd-91c7-77b99e5716ab 599269 0 2022-08-03 06:28:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 06:28:13.249: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6965 6006ff1b-922e-4acd-91c7-77b99e5716ab 599270 0 2022-08-03 06:28:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:28:13.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-6965" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":34,"skipped":661,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:28:13.267: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-1325 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Aug 3 06:28:13.321: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Aug 3 06:28:13.367: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:28:15.380: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:28:17.376: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:28:19.378: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:28:21.376: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:28:23.379: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:28:25.380: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:28:27.378: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:28:29.379: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:28:31.377: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:28:33.384: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 06:28:35.381: INFO: The status of Pod netserver-0 is Running (Ready = true) +Aug 3 06:28:35.395: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Aug 3 06:28:41.433: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Aug 3 06:28:41.433: INFO: Breadth first check of 172.29.31.98 on host 10.6.213.40... 
+Aug 3 06:28:41.439: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.29.175.18:9080/dial?request=hostname&protocol=http&host=172.29.31.98&port=8083&tries=1'] Namespace:pod-network-test-1325 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:28:41.439: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:28:41.440: INFO: ExecWithOptions: Clientset creation +Aug 3 06:28:41.440: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/pod-network-test-1325/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.29.175.18%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.29.31.98%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:28:41.610: INFO: Waiting for responses: map[] +Aug 3 06:28:41.610: INFO: reached 172.29.31.98 after 0/1 tries +Aug 3 06:28:41.610: INFO: Breadth first check of 172.29.175.11 on host 10.6.213.50... +Aug 3 06:28:41.616: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.29.175.18:9080/dial?request=hostname&protocol=http&host=172.29.175.11&port=8083&tries=1'] Namespace:pod-network-test-1325 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:28:41.616: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:28:41.618: INFO: ExecWithOptions: Clientset creation +Aug 3 06:28:41.618: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/pod-network-test-1325/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.29.175.18%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.29.175.11%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:28:41.787: INFO: Waiting for responses: map[] +Aug 3 06:28:41.787: INFO: reached 172.29.175.11 after 0/1 tries +Aug 3 06:28:41.787: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:28:41.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-1325" for this suite. 
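+
+The pod-to-pod HTTP check above can be approximated by hand outside the framework. A minimal sketch, assuming two existing pods named "client" and "web" in the current namespace, with "web" running the suite's agnhost netexec server on port 8083 (whose /hostname endpoint echoes the pod's hostname) and "client" having curl available; all names here are hypothetical:
+```
+# Look up the target pod's IP, then curl it from inside the client pod,
+# mirroring the exec-based dial step logged above.
+TARGET_IP=$(kubectl get pod web -o jsonpath='{.status.podIP}')
+kubectl exec client -- curl -g -q -s "http://${TARGET_IP}:8083/hostname"
+```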
+ +• [SLOW TEST:28.557 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":346,"completed":35,"skipped":676,"failed":0} +SS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:28:41.825: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 06:28:41.934: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3dd99acb-089b-4165-94df-9b966486599f" in namespace "projected-2569" to be "Succeeded or Failed" +Aug 3 06:28:41.972: INFO: Pod "downwardapi-volume-3dd99acb-089b-4165-94df-9b966486599f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.790346ms +Aug 3 06:28:43.986: INFO: Pod "downwardapi-volume-3dd99acb-089b-4165-94df-9b966486599f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051258734s +Aug 3 06:28:45.996: INFO: Pod "downwardapi-volume-3dd99acb-089b-4165-94df-9b966486599f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061492938s +Aug 3 06:28:48.013: INFO: Pod "downwardapi-volume-3dd99acb-089b-4165-94df-9b966486599f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078378711s +STEP: Saw pod success +Aug 3 06:28:48.013: INFO: Pod "downwardapi-volume-3dd99acb-089b-4165-94df-9b966486599f" satisfied condition "Succeeded or Failed" +Aug 3 06:28:48.020: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-3dd99acb-089b-4165-94df-9b966486599f container client-container: +STEP: delete the pod +Aug 3 06:28:48.085: INFO: Waiting for pod downwardapi-volume-3dd99acb-089b-4165-94df-9b966486599f to disappear +Aug 3 06:28:48.093: INFO: Pod downwardapi-volume-3dd99acb-089b-4165-94df-9b966486599f no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:28:48.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2569" for this suite. 
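+
+What the DefaultMode assertion above checks can be reproduced with a tiny manifest. A minimal sketch using a plain downwardAPI volume (for mode purposes it behaves like the projected variant the test uses) and hypothetical names throughout; "busybox" stands in for any image with a shell:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downwardapi-mode-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    # -L follows the atomic-writer symlinks so the real file mode is shown
+    command: ["ls", "-lL", "/etc/podinfo"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      defaultMode: 0400   # the mode expected on each projected file
+      items:
+      - path: podname
+        fieldRef:
+          fieldPath: metadata.name
+EOF
+kubectl logs downwardapi-mode-demo   # expect: -r-------- ... podname
+```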
+ +• [SLOW TEST:6.302 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":36,"skipped":678,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:28:48.128: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:28:48.232: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:28:50.244: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:28:52.246: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Running (Ready = false) +Aug 3 06:28:54.242: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Running (Ready = false) +Aug 3 06:28:56.244: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Running (Ready = false) +Aug 3 06:28:58.248: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Running (Ready = false) +Aug 3 06:29:00.240: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Running (Ready = false) +Aug 3 06:29:02.243: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Running (Ready = false) +Aug 3 06:29:04.243: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Running (Ready = false) +Aug 3 06:29:06.249: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Running (Ready = false) +Aug 3 06:29:08.243: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Running (Ready = false) +Aug 3 06:29:10.248: INFO: The status of Pod test-webserver-1a7c48fd-d524-4b56-8e2a-9f93cd207721 is Running (Ready = true) +Aug 3 06:29:10.254: INFO: Container started at 2022-08-03 06:28:51 +0000 UTC, pod became ready at 2022-08-03 06:29:08 +0000 UTC +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 
06:29:10.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7205" for this suite. + +• [SLOW TEST:22.152 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":37,"skipped":694,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:29:10.280: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +STEP: listing events with field selection filtering on source +STEP: listing events with field selection filtering on reportingController +STEP: getting the test event +STEP: patching the test event +STEP: getting the test event +STEP: updating the test event +STEP: getting the test event +STEP: deleting the test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:29:10.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-2908" for this suite. 
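+
+The listing and filtering steps in the Events API test above map directly onto kubectl. A short sketch; the reportingController value is hypothetical:
+```
+# List events across all namespaces, then narrow with a field selector,
+# the same server-side filtering the test exercises.
+kubectl get events -A
+kubectl get events.events.k8s.io -n default \
+  --field-selector reportingController=example-controller
+```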
+•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":38,"skipped":707,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:29:10.497: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Aug 3 06:29:10.554: INFO: Waiting up to 5m0s for pod "pod-12062d38-8534-4525-b173-eb2199f97477" in namespace "emptydir-4865" to be "Succeeded or Failed" +Aug 3 06:29:10.561: INFO: Pod "pod-12062d38-8534-4525-b173-eb2199f97477": Phase="Pending", Reason="", readiness=false. Elapsed: 6.195519ms +Aug 3 06:29:12.578: INFO: Pod "pod-12062d38-8534-4525-b173-eb2199f97477": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022954708s +Aug 3 06:29:14.596: INFO: Pod "pod-12062d38-8534-4525-b173-eb2199f97477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04130756s +STEP: Saw pod success +Aug 3 06:29:14.596: INFO: Pod "pod-12062d38-8534-4525-b173-eb2199f97477" satisfied condition "Succeeded or Failed" +Aug 3 06:29:14.610: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-12062d38-8534-4525-b173-eb2199f97477 container test-container: +STEP: delete the pod +Aug 3 06:29:14.664: INFO: Waiting for pod pod-12062d38-8534-4525-b173-eb2199f97477 to disappear +Aug 3 06:29:14.671: INFO: Pod pod-12062d38-8534-4525-b173-eb2199f97477 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:29:14.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4865" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":39,"skipped":727,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:29:14.698: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Aug 3 06:29:16.926: INFO: running pods: 0 < 3 +Aug 3 06:29:18.941: INFO: running pods: 0 < 3 +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:29:20.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-4938" for this suite. + +• [SLOW TEST:6.288 seconds] +[sig-apps] DisruptionController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":40,"skipped":746,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:29:20.988: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Aug 3 06:29:21.081: INFO: Waiting up to 5m0s for pod "pod-8fbc35b5-af23-40bd-a71c-9c187aa90ac5" in namespace "emptydir-2043" to be "Succeeded or Failed" +Aug 3 06:29:21.095: INFO: Pod "pod-8fbc35b5-af23-40bd-a71c-9c187aa90ac5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.417357ms +Aug 3 06:29:23.108: INFO: Pod "pod-8fbc35b5-af23-40bd-a71c-9c187aa90ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027095248s +Aug 3 06:29:25.121: INFO: Pod "pod-8fbc35b5-af23-40bd-a71c-9c187aa90ac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040343913s +STEP: Saw pod success +Aug 3 06:29:25.122: INFO: Pod "pod-8fbc35b5-af23-40bd-a71c-9c187aa90ac5" satisfied condition "Succeeded or Failed" +Aug 3 06:29:25.129: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-8fbc35b5-af23-40bd-a71c-9c187aa90ac5 container test-container: +STEP: delete the pod +Aug 3 06:29:25.177: INFO: Waiting for pod pod-8fbc35b5-af23-40bd-a71c-9c187aa90ac5 to disappear +Aug 3 06:29:25.183: INFO: Pod pod-8fbc35b5-af23-40bd-a71c-9c187aa90ac5 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:29:25.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2043" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":41,"skipped":802,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:29:25.220: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Aug 3 06:29:25.314: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: rename a version +STEP: check the new version name is served +STEP: check the old version name is removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:29:48.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-725" for this suite. 
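+
+The version-rename check above can be replayed on any cluster by inspecting what the apiserver publishes. A short sketch; the group "example.com" and resource "mycrds" are hypothetical:
+```
+# Ask for the schema of one served version, then list every version of the
+# group that the published OpenAPI document currently mentions.
+kubectl explain mycrds --api-version=example.com/v2
+kubectl get --raw /openapi/v2 | grep -o 'example\.com/v[0-9a-z]*' | sort -u
+```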
+ +• [SLOW TEST:23.520 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":42,"skipped":811,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:29:48.741: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:29:48.833: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Aug 3 06:29:53.854: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Aug 3 06:29:53.855: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 3 06:29:53.889: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-3827 957bf196-d4c4-438d-9081-9b15c973d940 600071 1 2022-08-03 06:29:53 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048d6508 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + +Aug 3 06:29:53.895: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. +Aug 3 06:29:53.895: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Aug 3 06:29:53.895: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3827 447be9d2-8f96-4b1e-8402-4ae99b981217 600072 1 2022-08-03 06:29:48 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 957bf196-d4c4-438d-9081-9b15c973d940 0xc0048d69b7 0xc0048d69b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0048d6a18 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Aug 3 06:29:53.905: INFO: Pod "test-cleanup-controller-9r868" is available: +&Pod{ObjectMeta:{test-cleanup-controller-9r868 test-cleanup-controller- deployment-3827 3720422d-6036-471e-b48a-1bc3cd8731f5 600067 0 2022-08-03 06:29:48 +0000 UTC map[name:cleanup-pod pod:httpd] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet test-cleanup-controller 447be9d2-8f96-4b1e-8402-4ae99b981217 0xc0048d6f27 0xc0048d6f28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-549vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-549vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 06:29:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-08-03 06:29:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 06:29:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 06:29:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:172.29.175.24,StartTime:2022-08-03 06:29:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 06:29:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://e393ec80d9e3b99d77aefe756918834b198dab8f05599442947d1b0a0483ae01,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.175.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:29:53.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-3827" for this suite. + +• [SLOW TEST:5.183 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":43,"skipped":822,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:29:53.924: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-f53b5f03-60a5-464e-afd3-4c17f0ae0169 +STEP: Creating a pod to test consume secrets +Aug 3 06:29:54.027: INFO: Waiting up to 5m0s for pod "pod-secrets-02f89377-9121-4388-8288-f498495608c0" in namespace "secrets-3643" to be "Succeeded or Failed" +Aug 3 06:29:54.034: INFO: Pod "pod-secrets-02f89377-9121-4388-8288-f498495608c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.607248ms +Aug 3 06:29:56.043: INFO: Pod "pod-secrets-02f89377-9121-4388-8288-f498495608c0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015543655s +Aug 3 06:29:58.056: INFO: Pod "pod-secrets-02f89377-9121-4388-8288-f498495608c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02864084s +Aug 3 06:30:00.066: INFO: Pod "pod-secrets-02f89377-9121-4388-8288-f498495608c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038664888s +STEP: Saw pod success +Aug 3 06:30:00.066: INFO: Pod "pod-secrets-02f89377-9121-4388-8288-f498495608c0" satisfied condition "Succeeded or Failed" +Aug 3 06:30:00.071: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-secrets-02f89377-9121-4388-8288-f498495608c0 container secret-volume-test: +STEP: delete the pod +Aug 3 06:30:00.134: INFO: Waiting for pod pod-secrets-02f89377-9121-4388-8288-f498495608c0 to disappear +Aug 3 06:30:00.139: INFO: Pod pod-secrets-02f89377-9121-4388-8288-f498495608c0 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:30:00.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3643" for this suite. + +• [SLOW TEST:6.240 seconds] +[sig-storage] Secrets +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":44,"skipped":830,"failed":0} +SSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:30:00.165: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9400.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9400.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-9400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9400.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 155.92.31.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.31.92.155_udp@PTR;check="$$(dig +tcp +noall +answer +search 155.92.31.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.31.92.155_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9400.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9400.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9400.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9400.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 155.92.31.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.31.92.155_udp@PTR;check="$$(dig +tcp +noall +answer +search 155.92.31.172.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/172.31.92.155_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 3 06:30:04.388: INFO: Unable to read wheezy_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:04.395: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:04.402: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:04.410: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:04.450: INFO: Unable to read jessie_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:04.457: INFO: Unable to read jessie_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:04.466: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:04.474: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:04.508: INFO: Lookups using dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421 failed for: [wheezy_udp@dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_udp@dns-test-service.dns-9400.svc.cluster.local jessie_tcp@dns-test-service.dns-9400.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local] + +Aug 3 06:30:09.516: INFO: Unable to read wheezy_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:09.523: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods 
dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:09.530: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:09.537: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:09.573: INFO: Unable to read jessie_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:09.582: INFO: Unable to read jessie_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:09.587: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:09.594: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:09.617: INFO: Lookups using dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421 failed for: [wheezy_udp@dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_udp@dns-test-service.dns-9400.svc.cluster.local jessie_tcp@dns-test-service.dns-9400.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local] + +Aug 3 06:30:14.518: INFO: Unable to read wheezy_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:14.524: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:14.533: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:14.544: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:14.603: INFO: Unable to read jessie_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the 
server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:14.613: INFO: Unable to read jessie_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:14.637: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:14.649: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:14.716: INFO: Lookups using dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421 failed for: [wheezy_udp@dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_udp@dns-test-service.dns-9400.svc.cluster.local jessie_tcp@dns-test-service.dns-9400.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local] + +Aug 3 06:30:19.521: INFO: Unable to read wheezy_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:19.528: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:19.535: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:19.542: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:19.588: INFO: Unable to read jessie_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:19.595: INFO: Unable to read jessie_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:19.602: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:19.610: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod 
dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:19.642: INFO: Lookups using dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421 failed for: [wheezy_udp@dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_udp@dns-test-service.dns-9400.svc.cluster.local jessie_tcp@dns-test-service.dns-9400.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local] + +Aug 3 06:30:24.519: INFO: Unable to read wheezy_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:24.525: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:24.531: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:24.539: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:24.571: INFO: Unable to read jessie_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:24.577: INFO: Unable to read jessie_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:24.582: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:24.588: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:24.620: INFO: Lookups using dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421 failed for: [wheezy_udp@dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_udp@dns-test-service.dns-9400.svc.cluster.local jessie_tcp@dns-test-service.dns-9400.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local] + +Aug 3 
06:30:29.520: INFO: Unable to read wheezy_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:29.528: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:29.542: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:29.550: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:29.592: INFO: Unable to read jessie_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:29.598: INFO: Unable to read jessie_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:29.606: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:29.616: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:29.643: INFO: Lookups using dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421 failed for: [wheezy_udp@dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_udp@dns-test-service.dns-9400.svc.cluster.local jessie_tcp@dns-test-service.dns-9400.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local] + +Aug 3 06:30:34.518: INFO: Unable to read wheezy_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:34.523: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:34.533: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods 
dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:34.539: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:34.574: INFO: Unable to read jessie_udp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:34.580: INFO: Unable to read jessie_tcp@dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:34.586: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:34.592: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local from pod dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421: the server could not find the requested resource (get pods dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421) +Aug 3 06:30:34.625: INFO: Lookups using dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421 failed for: [wheezy_udp@dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@dns-test-service.dns-9400.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_udp@dns-test-service.dns-9400.svc.cluster.local jessie_tcp@dns-test-service.dns-9400.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9400.svc.cluster.local] + +Aug 3 06:30:39.655: INFO: DNS probes using dns-9400/dns-test-f112aa7c-c18e-4ec8-9afb-61e9c038c421 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:30:39.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9400" for this suite. + +• [SLOW TEST:39.627 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":346,"completed":45,"skipped":833,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:30:39.792: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicaSet +STEP: Ensuring resource quota status captures replicaset creation +STEP: Deleting a ReplicaSet +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:30:50.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-8349" for this suite. + +• [SLOW TEST:11.185 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":346,"completed":46,"skipped":852,"failed":0} +SS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:30:50.980: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +Aug 3 06:30:57.192: INFO: 80 pods remaining +Aug 3 06:30:57.192: INFO: 80 pods has nil DeletionTimestamp +Aug 3 06:30:57.192: INFO: +Aug 3 06:30:58.209: INFO: 75 pods remaining +Aug 3 06:30:58.209: INFO: 73 pods has nil DeletionTimestamp +Aug 3 06:30:58.209: INFO: +Aug 3 06:30:59.394: INFO: 60 pods remaining +Aug 3 06:30:59.395: INFO: 60 pods has nil DeletionTimestamp +Aug 3 06:30:59.395: INFO: +Aug 3 06:31:00.204: INFO: 40 pods remaining +Aug 3 06:31:00.204: INFO: 40 pods has nil DeletionTimestamp +Aug 3 06:31:00.204: INFO: +Aug 3 06:31:01.185: INFO: 33 pods remaining +Aug 3 06:31:01.185: INFO: 32 pods has nil DeletionTimestamp +Aug 3 06:31:01.185: INFO: +Aug 3 06:31:02.188: INFO: 20 pods remaining +Aug 3 06:31:02.188: INFO: 20 pods has nil DeletionTimestamp +Aug 3 06:31:02.188: INFO: +Aug 3 06:31:03.236: INFO: 4 pods remaining +Aug 3 06:31:03.236: INFO: 0 pods has nil DeletionTimestamp +Aug 3 06:31:03.236: INFO: +STEP: Gathering metrics +Aug 3 06:31:04.310: INFO: The status of Pod dce-kube-controller-manager-dce-10-6-213-30 is Running (Ready = true) +Aug 3 06:32:04.929: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:32:04.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-6661" for this suite. 
+ +• [SLOW TEST:73.995 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":47,"skipped":854,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:32:04.977: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-d11992a0-b7ef-40b7-a1ad-b0cfcbf2e190 +STEP: Creating a pod to test consume secrets +Aug 3 06:32:05.813: INFO: Waiting up to 5m0s for pod "pod-secrets-d0fffcf8-234a-4ebc-84ba-b42deca967e2" in namespace "secrets-8461" to be "Succeeded or Failed" +Aug 3 06:32:05.820: INFO: Pod "pod-secrets-d0fffcf8-234a-4ebc-84ba-b42deca967e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.574797ms +Aug 3 06:32:07.832: INFO: Pod "pod-secrets-d0fffcf8-234a-4ebc-84ba-b42deca967e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018188695s +Aug 3 06:32:09.844: INFO: Pod "pod-secrets-d0fffcf8-234a-4ebc-84ba-b42deca967e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030224212s +Aug 3 06:32:11.855: INFO: Pod "pod-secrets-d0fffcf8-234a-4ebc-84ba-b42deca967e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041639875s +STEP: Saw pod success +Aug 3 06:32:11.855: INFO: Pod "pod-secrets-d0fffcf8-234a-4ebc-84ba-b42deca967e2" satisfied condition "Succeeded or Failed" +Aug 3 06:32:11.862: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-secrets-d0fffcf8-234a-4ebc-84ba-b42deca967e2 container secret-volume-test: +STEP: delete the pod +Aug 3 06:32:11.921: INFO: Waiting for pod pod-secrets-d0fffcf8-234a-4ebc-84ba-b42deca967e2 to disappear +Aug 3 06:32:11.927: INFO: Pod pod-secrets-d0fffcf8-234a-4ebc-84ba-b42deca967e2 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:32:11.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-8461" for this suite. +STEP: Destroying namespace "secret-namespace-2086" for this suite. 
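For the secret test just completed, the point is that identically named secrets in different namespaces stay isolated. A rough equivalent with illustrative namespace, secret, and pod names:
```
# Two secrets that share a name in different namespaces.
kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl create secret generic shared-name --from-literal=data=from-a -n demo-a
kubectl create secret generic shared-name --from-literal=data=from-b -n demo-b

# A pod in demo-a mounts its namespace-local copy; the identically named
# secret in demo-b must not leak into it.
kubectl apply -n demo-a -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-mount-check
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox:1.35
    command: ["cat", "/etc/secret/data"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
  volumes:
  - name: secret-vol
    secret:
      secretName: shared-name
EOF
kubectl logs -n demo-a secret-mount-check   # once the pod completes: "from-a"
```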
+ +• [SLOW TEST:6.999 seconds] +[sig-storage] Secrets +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":48,"skipped":903,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:32:11.976: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Aug 3 06:32:12.078: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: mark a version not serverd +STEP: check the unserved version gets removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:32:36.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-9595" for this suite. 
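The version-removal check above can be approximated manually. This sketch assumes a multi-version CRD already exists in the cluster; the CRD name, version index, and group string are placeholders:
```
# Stop serving one version of a multi-version CRD.
kubectl patch crd widgets.example.com --type=json \
  -p='[{"op":"replace","path":"/spec/versions/0/served","value":false}]'

# Definitions for the unserved version should drop out of /openapi/v2,
# while the still-served versions remain unchanged.
kubectl get --raw /openapi/v2 | grep -o 'example.com/v1alpha1' | wc -l
```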
+ +• [SLOW TEST:24.829 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":346,"completed":49,"skipped":907,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:32:36.806: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-00d61f0d-5b72-4ab6-b6d1-ece250aeb589 +STEP: Creating a pod to test consume configMaps +Aug 3 06:32:36.905: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cb2071ea-988e-4af2-9b66-a1241e2b42c6" in namespace "projected-8739" to be "Succeeded or Failed" +Aug 3 06:32:36.916: INFO: Pod "pod-projected-configmaps-cb2071ea-988e-4af2-9b66-a1241e2b42c6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.770574ms +Aug 3 06:32:38.934: INFO: Pod "pod-projected-configmaps-cb2071ea-988e-4af2-9b66-a1241e2b42c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02883255s +Aug 3 06:32:40.945: INFO: Pod "pod-projected-configmaps-cb2071ea-988e-4af2-9b66-a1241e2b42c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040308022s +Aug 3 06:32:42.959: INFO: Pod "pod-projected-configmaps-cb2071ea-988e-4af2-9b66-a1241e2b42c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054073451s +STEP: Saw pod success +Aug 3 06:32:42.959: INFO: Pod "pod-projected-configmaps-cb2071ea-988e-4af2-9b66-a1241e2b42c6" satisfied condition "Succeeded or Failed" +Aug 3 06:32:42.965: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-configmaps-cb2071ea-988e-4af2-9b66-a1241e2b42c6 container agnhost-container: +STEP: delete the pod +Aug 3 06:32:43.005: INFO: Waiting for pod pod-projected-configmaps-cb2071ea-988e-4af2-9b66-a1241e2b42c6 to disappear +Aug 3 06:32:43.010: INFO: Pod pod-projected-configmaps-cb2071ea-988e-4af2-9b66-a1241e2b42c6 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:32:43.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8739" for this suite. 
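The defaultMode check above boils down to mounting a projected ConfigMap with explicit file permissions. A small sketch, with illustrative names and 0400 chosen arbitrarily:
```
kubectl create configmap demo-cm --from-literal=key=value
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox:1.35
    command: ["stat", "-Lc", "%a", "/etc/projected/key"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/projected
  volumes:
  - name: cm-vol
    projected:
      defaultMode: 256   # decimal for octal 0400
      sources:
      - configMap:
          name: demo-cm
EOF
kubectl logs projected-mode-check   # once the pod completes, expect "400"
```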
+ +• [SLOW TEST:6.219 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":50,"skipped":917,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:32:43.026: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Aug 3 06:32:43.078: INFO: namespace kubectl-3045 +Aug 3 06:32:43.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-3045 create -f -' +Aug 3 06:32:44.210: INFO: stderr: "" +Aug 3 06:32:44.210: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Aug 3 06:32:45.225: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 06:32:45.225: INFO: Found 0 / 1 +Aug 3 06:32:46.219: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 06:32:46.219: INFO: Found 0 / 1 +Aug 3 06:32:47.224: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 06:32:47.224: INFO: Found 0 / 1 +Aug 3 06:32:48.217: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 06:32:48.217: INFO: Found 1 / 1 +Aug 3 06:32:48.217: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Aug 3 06:32:48.222: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 06:32:48.222: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Aug 3 06:32:48.222: INFO: wait on agnhost-primary startup in kubectl-3045 +Aug 3 06:32:48.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-3045 logs agnhost-primary-tqcdc agnhost-primary' +Aug 3 06:32:48.366: INFO: stderr: "" +Aug 3 06:32:48.366: INFO: stdout: "Paused\n" +STEP: exposing RC +Aug 3 06:32:48.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-3045 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Aug 3 06:32:48.536: INFO: stderr: "" +Aug 3 06:32:48.536: INFO: stdout: "service/rm2 exposed\n" +Aug 3 06:32:48.551: INFO: Service rm2 in namespace kubectl-3045 found. 
+STEP: exposing service +Aug 3 06:32:50.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-3045 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Aug 3 06:32:50.733: INFO: stderr: "" +Aug 3 06:32:50.733: INFO: stdout: "service/rm3 exposed\n" +Aug 3 06:32:50.744: INFO: Service rm3 in namespace kubectl-3045 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:32:52.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3045" for this suite. + +• [SLOW TEST:9.755 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl expose + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":346,"completed":51,"skipped":930,"failed":0} +SS +------------------------------ +[sig-storage] Secrets + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:32:52.781: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:32:52.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-160" for this suite. 
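The immutable-secret check just logged can be reproduced roughly as follows, assuming kubectl against a v1.21+ cluster; the secret name is illustrative and the final patch is expected to be refused:
```
# Mark a secret immutable, then try to change it.
kubectl create secret generic locked --from-literal=k=v
kubectl patch secret locked -p '{"immutable": true}'

# The API server should now reject any update to the secret's data.
kubectl patch secret locked -p '{"stringData": {"k": "changed"}}' \
  || echo "update rejected, as expected"
```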
+•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":52,"skipped":932,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:32:52.988: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:32:59.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-6001" for this suite. +STEP: Destroying namespace "nsdeletetest-1223" for this suite. +Aug 3 06:32:59.292: INFO: Namespace nsdeletetest-1223 was already deleted +STEP: Destroying namespace "nsdeletetest-2181" for this suite. 
+ +• [SLOW TEST:6.314 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":346,"completed":53,"skipped":953,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:32:59.303: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +Aug 3 06:33:00.533: INFO: The status of Pod dce-kube-controller-manager-dce-10-6-213-30 is Running (Ready = true) +Aug 3 06:34:00.774: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:34:00.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-7789" for this suite. 
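The orphan case is the mirror image of the foreground deletion sketched earlier: with the Orphan propagation policy the dependents survive their owner. A short kubectl sketch, names again illustrative:
```
# Deleting the Deployment with orphan propagation leaves its ReplicaSet
# (and pods) behind, now ownerless.
kubectl create deployment orphan-demo --image=k8s.gcr.io/pause:3.6
kubectl delete deployment orphan-demo --cascade=orphan

# The ReplicaSet survives the deletion of its former owner.
kubectl get rs -l app=orphan-demo
```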
+ +• [SLOW TEST:61.500 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":54,"skipped":979,"failed":0} +SSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:34:00.802: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-4437 +STEP: creating service affinity-clusterip in namespace services-4437 +STEP: creating replication controller affinity-clusterip in namespace services-4437 +I0803 06:34:00.898205 21 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-4437, replica count: 3 +I0803 06:34:03.949503 21 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 06:34:06.949827 21 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 06:34:06.967: INFO: Creating new exec pod +Aug 3 06:34:13.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4437 exec execpod-affinity6559h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' +Aug 3 06:34:14.285: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Aug 3 06:34:14.285: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 06:34:14.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4437 exec execpod-affinity6559h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.249.124 80' +Aug 3 06:34:14.549: INFO: stderr: "+ + echonc hostName\n -v -t -w 2 172.31.249.124 80\nConnection to 172.31.249.124 80 port [tcp/http] succeeded!\n" +Aug 3 06:34:14.549: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 06:34:14.550: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4437 exec execpod-affinity6559h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.31.249.124:80/ ; done' +Aug 3 06:34:14.965: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.249.124:80/\n" +Aug 3 06:34:14.965: INFO: stdout: "\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf\naffinity-clusterip-c9dmf" +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Received response from host: affinity-clusterip-c9dmf +Aug 3 06:34:14.965: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-4437, will wait for the garbage collector to delete the pods +Aug 3 06:34:15.055: INFO: Deleting ReplicationController affinity-clusterip took: 15.115643ms +Aug 3 06:34:15.156: INFO: Terminating ReplicationController affinity-clusterip pods took: 101.006281ms +[AfterEach] [sig-network] Services + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:34:19.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4437" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:18.624 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":55,"skipped":986,"failed":0} +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:34:19.426: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-cb90b67e-4eb3-42f0-899a-2103064fb2e8 +STEP: Creating a pod to test consume configMaps +Aug 3 06:34:19.546: INFO: Waiting up to 5m0s for pod "pod-configmaps-c0f1efdf-c18e-49c8-b615-e98a1ffed785" in namespace "configmap-195" to be "Succeeded or Failed" +Aug 3 06:34:19.558: INFO: Pod "pod-configmaps-c0f1efdf-c18e-49c8-b615-e98a1ffed785": Phase="Pending", Reason="", readiness=false. Elapsed: 11.880444ms +Aug 3 06:34:21.568: INFO: Pod "pod-configmaps-c0f1efdf-c18e-49c8-b615-e98a1ffed785": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022164583s +Aug 3 06:34:23.580: INFO: Pod "pod-configmaps-c0f1efdf-c18e-49c8-b615-e98a1ffed785": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033767599s +Aug 3 06:34:25.593: INFO: Pod "pod-configmaps-c0f1efdf-c18e-49c8-b615-e98a1ffed785": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.046939244s +STEP: Saw pod success +Aug 3 06:34:25.593: INFO: Pod "pod-configmaps-c0f1efdf-c18e-49c8-b615-e98a1ffed785" satisfied condition "Succeeded or Failed" +Aug 3 06:34:25.599: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-configmaps-c0f1efdf-c18e-49c8-b615-e98a1ffed785 container agnhost-container: +STEP: delete the pod +Aug 3 06:34:25.659: INFO: Waiting for pod pod-configmaps-c0f1efdf-c18e-49c8-b615-e98a1ffed785 to disappear +Aug 3 06:34:25.664: INFO: Pod pod-configmaps-c0f1efdf-c18e-49c8-b615-e98a1ffed785 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:34:25.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-195" for this suite. + +• [SLOW TEST:6.256 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":56,"skipped":986,"failed":0} +SSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:34:25.683: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Aug 3 06:34:25.800: INFO: Waiting up to 1m0s for all nodes to be ready +Aug 3 06:35:25.897: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:35:25.903: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 +STEP: Finding an available node +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. 
+Aug 3 06:35:30.035: INFO: found a healthy node: dce-10-6-213-50 +[It] runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:35:46.197: INFO: pods created so far: [1 1 1] +Aug 3 06:35:46.197: INFO: length of pods created so far: 3 +Aug 3 06:35:56.223: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:36:03.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-664" for this suite. +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:36:03.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-9899" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:97.942 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":57,"skipped":991,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:36:03.625: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 06:36:03.726: INFO: Waiting up to 5m0s for pod "downwardapi-volume-700e1c96-7e6b-4d66-9bd9-da8129cae88d" in namespace "downward-api-3770" to be "Succeeded or Failed" +Aug 3 06:36:03.734: INFO: Pod "downwardapi-volume-700e1c96-7e6b-4d66-9bd9-da8129cae88d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.174999ms +Aug 3 06:36:05.746: INFO: Pod "downwardapi-volume-700e1c96-7e6b-4d66-9bd9-da8129cae88d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019525422s +Aug 3 06:36:07.762: INFO: Pod "downwardapi-volume-700e1c96-7e6b-4d66-9bd9-da8129cae88d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035970554s +Aug 3 06:36:09.771: INFO: Pod "downwardapi-volume-700e1c96-7e6b-4d66-9bd9-da8129cae88d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044248762s +STEP: Saw pod success +Aug 3 06:36:09.771: INFO: Pod "downwardapi-volume-700e1c96-7e6b-4d66-9bd9-da8129cae88d" satisfied condition "Succeeded or Failed" +Aug 3 06:36:09.779: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-700e1c96-7e6b-4d66-9bd9-da8129cae88d container client-container: +STEP: delete the pod +Aug 3 06:36:09.880: INFO: Waiting for pod downwardapi-volume-700e1c96-7e6b-4d66-9bd9-da8129cae88d to disappear +Aug 3 06:36:09.889: INFO: Pod downwardapi-volume-700e1c96-7e6b-4d66-9bd9-da8129cae88d no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:36:09.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3770" for this suite. + +• [SLOW TEST:6.295 seconds] +[sig-storage] Downward API volume +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":58,"skipped":1004,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:36:09.920: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:36:10.027: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Aug 3 06:36:13.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-1389 --namespace=crd-publish-openapi-1389 create -f -' +Aug 3 06:36:14.557: INFO: stderr: "" +Aug 3 06:36:14.558: INFO: stdout: "e2e-test-crd-publish-openapi-2431-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Aug 3 06:36:14.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 
--namespace=crd-publish-openapi-1389 --namespace=crd-publish-openapi-1389 delete e2e-test-crd-publish-openapi-2431-crds test-cr' +Aug 3 06:36:14.704: INFO: stderr: "" +Aug 3 06:36:14.704: INFO: stdout: "e2e-test-crd-publish-openapi-2431-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Aug 3 06:36:14.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-1389 --namespace=crd-publish-openapi-1389 apply -f -' +Aug 3 06:36:14.975: INFO: stderr: "" +Aug 3 06:36:14.975: INFO: stdout: "e2e-test-crd-publish-openapi-2431-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Aug 3 06:36:14.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-1389 --namespace=crd-publish-openapi-1389 delete e2e-test-crd-publish-openapi-2431-crds test-cr' +Aug 3 06:36:15.122: INFO: stderr: "" +Aug 3 06:36:15.122: INFO: stdout: "e2e-test-crd-publish-openapi-2431-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Aug 3 06:36:15.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-1389 explain e2e-test-crd-publish-openapi-2431-crds' +Aug 3 06:36:16.183: INFO: stderr: "" +Aug 3 06:36:16.183: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2431-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:36:19.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1389" for this suite. 
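The CRD exercised above preserves unknown fields inside a nested object while still publishing a schema that kubectl can validate and explain. A minimal hand-written sketch of such a CRD follows; the group, kind, and field names are illustrative assumptions, not the generated names the test uses:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            # nested object that accepts arbitrary unknown properties
            type: object
            x-kubernetes-preserve-unknown-fields: true
EOF
# the published schema still explains the typed fields around the opaque one
kubectl explain waldos.spec
```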
+ +• [SLOW TEST:9.909 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":346,"completed":59,"skipped":1013,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:36:19.829: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename svc-latency +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:36:19.905: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-4705 +I0803 06:36:19.914347 21 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4705, replica count: 1 +I0803 06:36:20.966152 21 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 06:36:21.968095 21 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 06:36:22.968737 21 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 06:36:23.969901 21 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 06:36:24.971003 21 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 06:36:25.093: INFO: Created: latency-svc-mtjtm +Aug 3 06:36:25.109: INFO: Got endpoints: latency-svc-mtjtm [37.275688ms] +Aug 3 06:36:25.135: INFO: Created: latency-svc-4jmvn +Aug 3 06:36:25.144: INFO: Got endpoints: latency-svc-4jmvn [35.322038ms] +Aug 3 06:36:25.151: INFO: Created: latency-svc-nvnvw +Aug 3 06:36:25.167: INFO: Got endpoints: latency-svc-nvnvw [57.881398ms] +Aug 3 06:36:25.168: INFO: Created: latency-svc-wtpn6 +Aug 3 06:36:25.182: INFO: Got endpoints: latency-svc-wtpn6 [72.080302ms] +Aug 3 06:36:25.189: INFO: Created: latency-svc-6prg7 +Aug 3 06:36:25.222: INFO: Got endpoints: latency-svc-6prg7 [111.48463ms] +Aug 3 06:36:25.228: INFO: Created: latency-svc-gjmkv +Aug 3 06:36:25.242: INFO: Got endpoints: latency-svc-gjmkv [131.264645ms] +Aug 3 06:36:25.262: INFO: Created: latency-svc-qljdz +Aug 3 06:36:25.282: INFO: Got endpoints: latency-svc-qljdz [171.544851ms] 
+Aug 3 06:36:25.297: INFO: Created: latency-svc-rkk4r +Aug 3 06:36:25.331: INFO: Got endpoints: latency-svc-rkk4r [221.406434ms] +Aug 3 06:36:25.360: INFO: Created: latency-svc-vvtjz +Aug 3 06:36:25.391: INFO: Got endpoints: latency-svc-vvtjz [280.998998ms] +Aug 3 06:36:25.402: INFO: Created: latency-svc-586h6 +Aug 3 06:36:25.412: INFO: Got endpoints: latency-svc-586h6 [301.742629ms] +Aug 3 06:36:25.425: INFO: Created: latency-svc-2tcph +Aug 3 06:36:25.443: INFO: Created: latency-svc-dxfx8 +Aug 3 06:36:25.444: INFO: Got endpoints: latency-svc-2tcph [334.369306ms] +Aug 3 06:36:25.466: INFO: Got endpoints: latency-svc-dxfx8 [356.113709ms] +Aug 3 06:36:25.482: INFO: Created: latency-svc-jbww8 +Aug 3 06:36:25.498: INFO: Got endpoints: latency-svc-jbww8 [387.68148ms] +Aug 3 06:36:25.517: INFO: Created: latency-svc-9mhs8 +Aug 3 06:36:25.527: INFO: Got endpoints: latency-svc-9mhs8 [417.361036ms] +Aug 3 06:36:25.568: INFO: Created: latency-svc-v6b7h +Aug 3 06:36:25.597: INFO: Created: latency-svc-lnptz +Aug 3 06:36:25.601: INFO: Got endpoints: latency-svc-v6b7h [491.421602ms] +Aug 3 06:36:25.629: INFO: Got endpoints: latency-svc-lnptz [518.962486ms] +Aug 3 06:36:25.638: INFO: Created: latency-svc-9hv29 +Aug 3 06:36:25.657: INFO: Got endpoints: latency-svc-9hv29 [513.168023ms] +Aug 3 06:36:25.658: INFO: Created: latency-svc-4zvk9 +Aug 3 06:36:25.671: INFO: Got endpoints: latency-svc-4zvk9 [504.145339ms] +Aug 3 06:36:25.675: INFO: Created: latency-svc-nd4rn +Aug 3 06:36:25.711: INFO: Got endpoints: latency-svc-nd4rn [528.104827ms] +Aug 3 06:36:25.712: INFO: Created: latency-svc-nc4kd +Aug 3 06:36:25.735: INFO: Got endpoints: latency-svc-nc4kd [513.588571ms] +Aug 3 06:36:25.763: INFO: Created: latency-svc-mf22d +Aug 3 06:36:25.782: INFO: Got endpoints: latency-svc-mf22d [540.533636ms] +Aug 3 06:36:25.812: INFO: Created: latency-svc-s9rh4 +Aug 3 06:36:25.823: INFO: Created: latency-svc-x98ks +Aug 3 06:36:25.849: INFO: Got endpoints: latency-svc-s9rh4 [566.707465ms] +Aug 3 06:36:25.879: INFO: Got endpoints: latency-svc-x98ks [547.521247ms] +Aug 3 06:36:25.883: INFO: Created: latency-svc-sbmj6 +Aug 3 06:36:25.897: INFO: Got endpoints: latency-svc-sbmj6 [505.709577ms] +Aug 3 06:36:25.909: INFO: Created: latency-svc-sz9ch +Aug 3 06:36:25.931: INFO: Got endpoints: latency-svc-sz9ch [519.220555ms] +Aug 3 06:36:25.937: INFO: Created: latency-svc-fqqrg +Aug 3 06:36:25.947: INFO: Got endpoints: latency-svc-fqqrg [503.169322ms] +Aug 3 06:36:25.956: INFO: Created: latency-svc-5ss5l +Aug 3 06:36:25.968: INFO: Got endpoints: latency-svc-5ss5l [502.38684ms] +Aug 3 06:36:25.976: INFO: Created: latency-svc-6zv8k +Aug 3 06:36:25.988: INFO: Got endpoints: latency-svc-6zv8k [490.57766ms] +Aug 3 06:36:25.997: INFO: Created: latency-svc-pmhmj +Aug 3 06:36:26.007: INFO: Created: latency-svc-vs7j5 +Aug 3 06:36:26.007: INFO: Got endpoints: latency-svc-pmhmj [479.922941ms] +Aug 3 06:36:26.024: INFO: Got endpoints: latency-svc-vs7j5 [422.183004ms] +Aug 3 06:36:26.029: INFO: Created: latency-svc-2w8wk +Aug 3 06:36:26.050: INFO: Got endpoints: latency-svc-2w8wk [421.09394ms] +Aug 3 06:36:26.057: INFO: Created: latency-svc-xmrx4 +Aug 3 06:36:26.072: INFO: Got endpoints: latency-svc-xmrx4 [414.708154ms] +Aug 3 06:36:26.075: INFO: Created: latency-svc-7mr4x +Aug 3 06:36:26.100: INFO: Got endpoints: latency-svc-7mr4x [428.906255ms] +Aug 3 06:36:26.116: INFO: Created: latency-svc-htmx6 +Aug 3 06:36:26.210: INFO: Created: latency-svc-xrh6c +Aug 3 06:36:26.215: INFO: Got endpoints: latency-svc-htmx6 [504.318472ms] +Aug 3 06:36:26.235: 
INFO: Created: latency-svc-srv4h +Aug 3 06:36:26.235: INFO: Got endpoints: latency-svc-xrh6c [499.130998ms] +Aug 3 06:36:26.298: INFO: Got endpoints: latency-svc-srv4h [515.748782ms] +Aug 3 06:36:26.299: INFO: Created: latency-svc-zx7bv +Aug 3 06:36:26.314: INFO: Got endpoints: latency-svc-zx7bv [465.311062ms] +Aug 3 06:36:26.319: INFO: Created: latency-svc-hhscd +Aug 3 06:36:26.330: INFO: Created: latency-svc-vcsz2 +Aug 3 06:36:26.334: INFO: Got endpoints: latency-svc-hhscd [455.016838ms] +Aug 3 06:36:26.340: INFO: Got endpoints: latency-svc-vcsz2 [442.85618ms] +Aug 3 06:36:26.343: INFO: Created: latency-svc-t5vvk +Aug 3 06:36:26.354: INFO: Got endpoints: latency-svc-t5vvk [422.63894ms] +Aug 3 06:36:26.354: INFO: Created: latency-svc-ntklv +Aug 3 06:36:26.368: INFO: Got endpoints: latency-svc-ntklv [421.035927ms] +Aug 3 06:36:26.377: INFO: Created: latency-svc-h8qqp +Aug 3 06:36:26.398: INFO: Got endpoints: latency-svc-h8qqp [429.587075ms] +Aug 3 06:36:26.417: INFO: Created: latency-svc-wgmgb +Aug 3 06:36:26.429: INFO: Got endpoints: latency-svc-wgmgb [440.754358ms] +Aug 3 06:36:26.439: INFO: Created: latency-svc-sz2dl +Aug 3 06:36:26.457: INFO: Got endpoints: latency-svc-sz2dl [449.745504ms] +Aug 3 06:36:26.717: INFO: Created: latency-svc-6z6zm +Aug 3 06:36:26.717: INFO: Created: latency-svc-v2p5f +Aug 3 06:36:26.717: INFO: Created: latency-svc-9hrtf +Aug 3 06:36:26.717: INFO: Created: latency-svc-ndqw7 +Aug 3 06:36:26.717: INFO: Created: latency-svc-fhwqn +Aug 3 06:36:26.717: INFO: Created: latency-svc-7jv6k +Aug 3 06:36:26.717: INFO: Created: latency-svc-9m6d5 +Aug 3 06:36:26.717: INFO: Created: latency-svc-vnj57 +Aug 3 06:36:26.718: INFO: Created: latency-svc-jjtvb +Aug 3 06:36:26.719: INFO: Created: latency-svc-6crns +Aug 3 06:36:26.719: INFO: Created: latency-svc-95wps +Aug 3 06:36:26.720: INFO: Created: latency-svc-6kpdj +Aug 3 06:36:26.732: INFO: Created: latency-svc-q877g +Aug 3 06:36:26.733: INFO: Created: latency-svc-wpvln +Aug 3 06:36:26.741: INFO: Created: latency-svc-tnfnm +Aug 3 06:36:26.742: INFO: Got endpoints: latency-svc-9m6d5 [408.59383ms] +Aug 3 06:36:26.763: INFO: Got endpoints: latency-svc-jjtvb [305.484913ms] +Aug 3 06:36:26.800: INFO: Got endpoints: latency-svc-q877g [749.592627ms] +Aug 3 06:36:26.800: INFO: Got endpoints: latency-svc-tnfnm [699.91503ms] +Aug 3 06:36:26.808: INFO: Created: latency-svc-rg6wc +Aug 3 06:36:26.823: INFO: Got endpoints: latency-svc-6crns [587.958367ms] +Aug 3 06:36:26.828: INFO: Got endpoints: latency-svc-fhwqn [612.918949ms] +Aug 3 06:36:26.828: INFO: Got endpoints: latency-svc-v2p5f [430.075081ms] +Aug 3 06:36:26.828: INFO: Got endpoints: latency-svc-wpvln [473.720946ms] +Aug 3 06:36:26.828: INFO: Got endpoints: latency-svc-6kpdj [513.933676ms] +Aug 3 06:36:26.855: INFO: Created: latency-svc-t2z5j +Aug 3 06:36:26.856: INFO: Got endpoints: latency-svc-7jv6k [487.478402ms] +Aug 3 06:36:26.856: INFO: Got endpoints: latency-svc-95wps [515.78174ms] +Aug 3 06:36:26.856: INFO: Got endpoints: latency-svc-9hrtf [426.724462ms] +Aug 3 06:36:26.865: INFO: Got endpoints: latency-svc-vnj57 [841.217129ms] +Aug 3 06:36:26.904: INFO: Got endpoints: latency-svc-6z6zm [605.146034ms] +Aug 3 06:36:26.904: INFO: Got endpoints: latency-svc-ndqw7 [831.962742ms] +Aug 3 06:36:26.929: INFO: Got endpoints: latency-svc-rg6wc [186.211612ms] +Aug 3 06:36:26.932: INFO: Got endpoints: latency-svc-t2z5j [169.546369ms] +Aug 3 06:36:26.933: INFO: Created: latency-svc-krm6t +Aug 3 06:36:26.945: INFO: Got endpoints: latency-svc-krm6t [144.359365ms] +Aug 3 06:36:26.949: 
INFO: Created: latency-svc-hngkh +Aug 3 06:36:26.966: INFO: Got endpoints: latency-svc-hngkh [166.108428ms] +Aug 3 06:36:26.976: INFO: Created: latency-svc-pgsrk +Aug 3 06:36:26.997: INFO: Got endpoints: latency-svc-pgsrk [174.314305ms] +Aug 3 06:36:27.008: INFO: Created: latency-svc-sxlsb +Aug 3 06:36:27.021: INFO: Got endpoints: latency-svc-sxlsb [193.199142ms] +Aug 3 06:36:27.026: INFO: Created: latency-svc-kkcb9 +Aug 3 06:36:27.037: INFO: Got endpoints: latency-svc-kkcb9 [208.80029ms] +Aug 3 06:36:27.044: INFO: Created: latency-svc-md5xg +Aug 3 06:36:27.061: INFO: Got endpoints: latency-svc-md5xg [232.928383ms] +Aug 3 06:36:27.062: INFO: Created: latency-svc-n44mz +Aug 3 06:36:27.081: INFO: Created: latency-svc-tvt7s +Aug 3 06:36:27.081: INFO: Got endpoints: latency-svc-n44mz [252.860597ms] +Aug 3 06:36:27.115: INFO: Got endpoints: latency-svc-tvt7s [258.856626ms] +Aug 3 06:36:27.117: INFO: Created: latency-svc-c6rg4 +Aug 3 06:36:27.130: INFO: Got endpoints: latency-svc-c6rg4 [273.637571ms] +Aug 3 06:36:27.140: INFO: Created: latency-svc-5rz6l +Aug 3 06:36:27.162: INFO: Got endpoints: latency-svc-5rz6l [305.835168ms] +Aug 3 06:36:27.174: INFO: Created: latency-svc-s8bdk +Aug 3 06:36:27.197: INFO: Created: latency-svc-9gvqs +Aug 3 06:36:27.214: INFO: Got endpoints: latency-svc-s8bdk [349.067517ms] +Aug 3 06:36:27.241: INFO: Created: latency-svc-4x4dm +Aug 3 06:36:27.257: INFO: Created: latency-svc-gvsz2 +Aug 3 06:36:27.259: INFO: Got endpoints: latency-svc-9gvqs [355.81278ms] +Aug 3 06:36:27.315: INFO: Got endpoints: latency-svc-4x4dm [411.122241ms] +Aug 3 06:36:27.317: INFO: Created: latency-svc-sbv9c +Aug 3 06:36:27.353: INFO: Created: latency-svc-j8kdl +Aug 3 06:36:27.368: INFO: Got endpoints: latency-svc-gvsz2 [438.735501ms] +Aug 3 06:36:27.377: INFO: Created: latency-svc-csqzb +Aug 3 06:36:27.397: INFO: Created: latency-svc-fjv4l +Aug 3 06:36:27.410: INFO: Got endpoints: latency-svc-sbv9c [478.034555ms] +Aug 3 06:36:27.427: INFO: Created: latency-svc-bq4zz +Aug 3 06:36:27.440: INFO: Created: latency-svc-hspg9 +Aug 3 06:36:27.454: INFO: Got endpoints: latency-svc-j8kdl [508.90852ms] +Aug 3 06:36:27.460: INFO: Created: latency-svc-zcg8s +Aug 3 06:36:27.475: INFO: Created: latency-svc-96zwn +Aug 3 06:36:27.488: INFO: Created: latency-svc-fpzl7 +Aug 3 06:36:27.493: INFO: Created: latency-svc-mkg9d +Aug 3 06:36:27.501: INFO: Got endpoints: latency-svc-csqzb [534.469574ms] +Aug 3 06:36:27.505: INFO: Created: latency-svc-8pdpz +Aug 3 06:36:27.518: INFO: Created: latency-svc-p2b5f +Aug 3 06:36:27.539: INFO: Created: latency-svc-8qwn9 +Aug 3 06:36:27.548: INFO: Created: latency-svc-wjktm +Aug 3 06:36:27.555: INFO: Got endpoints: latency-svc-fjv4l [557.994043ms] +Aug 3 06:36:27.568: INFO: Created: latency-svc-8xlxr +Aug 3 06:36:27.590: INFO: Created: latency-svc-4m27l +Aug 3 06:36:27.606: INFO: Got endpoints: latency-svc-bq4zz [584.108527ms] +Aug 3 06:36:27.608: INFO: Created: latency-svc-mbs7m +Aug 3 06:36:27.624: INFO: Created: latency-svc-nkd72 +Aug 3 06:36:27.642: INFO: Created: latency-svc-m8jhw +Aug 3 06:36:27.650: INFO: Got endpoints: latency-svc-hspg9 [612.99474ms] +Aug 3 06:36:27.651: INFO: Created: latency-svc-64wwc +Aug 3 06:36:27.672: INFO: Created: latency-svc-fjfdd +Aug 3 06:36:27.700: INFO: Got endpoints: latency-svc-zcg8s [639.322404ms] +Aug 3 06:36:27.716: INFO: Created: latency-svc-hl7zg +Aug 3 06:36:27.755: INFO: Got endpoints: latency-svc-96zwn [674.22019ms] +Aug 3 06:36:27.775: INFO: Created: latency-svc-58cdk +Aug 3 06:36:27.809: INFO: Got endpoints: latency-svc-fpzl7 
[693.920549ms] +Aug 3 06:36:27.837: INFO: Created: latency-svc-95v4g +Aug 3 06:36:27.857: INFO: Got endpoints: latency-svc-mkg9d [726.991668ms] +Aug 3 06:36:27.881: INFO: Created: latency-svc-5pzlv +Aug 3 06:36:27.903: INFO: Got endpoints: latency-svc-8pdpz [741.021103ms] +Aug 3 06:36:27.924: INFO: Created: latency-svc-rbzhk +Aug 3 06:36:27.956: INFO: Got endpoints: latency-svc-p2b5f [741.718312ms] +Aug 3 06:36:27.972: INFO: Created: latency-svc-g66rl +Aug 3 06:36:28.001: INFO: Got endpoints: latency-svc-8qwn9 [741.408372ms] +Aug 3 06:36:28.023: INFO: Created: latency-svc-5j27t +Aug 3 06:36:28.053: INFO: Got endpoints: latency-svc-wjktm [737.736812ms] +Aug 3 06:36:28.142: INFO: Created: latency-svc-jjtdc +Aug 3 06:36:28.147: INFO: Got endpoints: latency-svc-8xlxr [779.380866ms] +Aug 3 06:36:28.158: INFO: Got endpoints: latency-svc-4m27l [747.593839ms] +Aug 3 06:36:28.176: INFO: Created: latency-svc-scxv4 +Aug 3 06:36:28.184: INFO: Created: latency-svc-fv8wf +Aug 3 06:36:28.203: INFO: Got endpoints: latency-svc-mbs7m [749.379601ms] +Aug 3 06:36:28.433: INFO: Got endpoints: latency-svc-m8jhw [878.088932ms] +Aug 3 06:36:28.433: INFO: Got endpoints: latency-svc-nkd72 [932.465496ms] +Aug 3 06:36:28.435: INFO: Got endpoints: latency-svc-fjfdd [785.494363ms] +Aug 3 06:36:28.436: INFO: Got endpoints: latency-svc-64wwc [830.103089ms] +Aug 3 06:36:28.440: INFO: Created: latency-svc-bgshk +Aug 3 06:36:30.223: INFO: Got endpoints: latency-svc-hl7zg [2.52249731s] +Aug 3 06:36:30.231: INFO: Got endpoints: latency-svc-5pzlv [2.373907467s] +Aug 3 06:36:30.231: INFO: Got endpoints: latency-svc-95v4g [2.421962286s] +Aug 3 06:36:30.231: INFO: Got endpoints: latency-svc-58cdk [2.475667274s] +Aug 3 06:36:30.234: INFO: Got endpoints: latency-svc-rbzhk [2.330736455s] +Aug 3 06:36:30.249: INFO: Created: latency-svc-gw7p2 +Aug 3 06:36:30.250: INFO: Got endpoints: latency-svc-g66rl [2.293650689s] +Aug 3 06:36:30.283: INFO: Got endpoints: latency-svc-scxv4 [2.135838964s] +Aug 3 06:36:30.283: INFO: Got endpoints: latency-svc-5j27t [2.281963034s] +Aug 3 06:36:30.284: INFO: Got endpoints: latency-svc-jjtdc [2.23048719s] +Aug 3 06:36:30.284: INFO: Got endpoints: latency-svc-bgshk [2.080522552s] +Aug 3 06:36:30.284: INFO: Got endpoints: latency-svc-fv8wf [2.125680789s] +Aug 3 06:36:30.284: INFO: Created: latency-svc-kdtzl +Aug 3 06:36:30.300: INFO: Got endpoints: latency-svc-gw7p2 [1.866921154s] +Aug 3 06:36:30.302: INFO: Created: latency-svc-tbwxr +Aug 3 06:36:30.319: INFO: Got endpoints: latency-svc-kdtzl [1.885846979s] +Aug 3 06:36:30.325: INFO: Got endpoints: latency-svc-tbwxr [1.889750749s] +Aug 3 06:36:30.335: INFO: Created: latency-svc-7f2wj +Aug 3 06:36:30.346: INFO: Got endpoints: latency-svc-7f2wj [1.909721118s] +Aug 3 06:36:30.358: INFO: Created: latency-svc-l7xwn +Aug 3 06:36:30.365: INFO: Got endpoints: latency-svc-l7xwn [142.016231ms] +Aug 3 06:36:30.367: INFO: Created: latency-svc-h8mzb +Aug 3 06:36:30.382: INFO: Got endpoints: latency-svc-h8mzb [151.128349ms] +Aug 3 06:36:30.384: INFO: Created: latency-svc-svvr4 +Aug 3 06:36:30.391: INFO: Got endpoints: latency-svc-svvr4 [159.977069ms] +Aug 3 06:36:30.394: INFO: Created: latency-svc-n9764 +Aug 3 06:36:30.406: INFO: Got endpoints: latency-svc-n9764 [175.31842ms] +Aug 3 06:36:30.419: INFO: Created: latency-svc-ftppt +Aug 3 06:36:30.430: INFO: Created: latency-svc-8fclh +Aug 3 06:36:30.436: INFO: Got endpoints: latency-svc-ftppt [201.864813ms] +Aug 3 06:36:30.442: INFO: Created: latency-svc-lqz7t +Aug 3 06:36:30.444: INFO: Got endpoints: latency-svc-8fclh 
[193.98306ms] +Aug 3 06:36:30.455: INFO: Got endpoints: latency-svc-lqz7t [171.478168ms] +Aug 3 06:36:30.455: INFO: Created: latency-svc-jfgms +Aug 3 06:36:30.464: INFO: Got endpoints: latency-svc-jfgms [179.93086ms] +Aug 3 06:36:30.470: INFO: Created: latency-svc-pzd7c +Aug 3 06:36:30.490: INFO: Got endpoints: latency-svc-pzd7c [206.762594ms] +Aug 3 06:36:30.503: INFO: Created: latency-svc-qnd4x +Aug 3 06:36:30.511: INFO: Created: latency-svc-5mb5z +Aug 3 06:36:30.511: INFO: Got endpoints: latency-svc-qnd4x [227.376737ms] +Aug 3 06:36:30.573: INFO: Got endpoints: latency-svc-5mb5z [272.632515ms] +Aug 3 06:36:30.578: INFO: Created: latency-svc-hgghw +Aug 3 06:36:30.588: INFO: Created: latency-svc-bbz9g +Aug 3 06:36:30.595: INFO: Got endpoints: latency-svc-hgghw [311.549751ms] +Aug 3 06:36:30.603: INFO: Got endpoints: latency-svc-bbz9g [283.746043ms] +Aug 3 06:36:30.609: INFO: Created: latency-svc-r82dw +Aug 3 06:36:30.623: INFO: Got endpoints: latency-svc-r82dw [297.40271ms] +Aug 3 06:36:30.625: INFO: Created: latency-svc-rhbw7 +Aug 3 06:36:30.639: INFO: Got endpoints: latency-svc-rhbw7 [293.036426ms] +Aug 3 06:36:30.647: INFO: Created: latency-svc-q6dc9 +Aug 3 06:36:30.657: INFO: Got endpoints: latency-svc-q6dc9 [291.70002ms] +Aug 3 06:36:30.662: INFO: Created: latency-svc-cm48j +Aug 3 06:36:30.674: INFO: Got endpoints: latency-svc-cm48j [292.39727ms] +Aug 3 06:36:30.679: INFO: Created: latency-svc-6jzbf +Aug 3 06:36:30.687: INFO: Got endpoints: latency-svc-6jzbf [295.779714ms] +Aug 3 06:36:30.691: INFO: Created: latency-svc-ldbxb +Aug 3 06:36:30.700: INFO: Got endpoints: latency-svc-ldbxb [293.921719ms] +Aug 3 06:36:30.705: INFO: Created: latency-svc-ktv5t +Aug 3 06:36:30.713: INFO: Got endpoints: latency-svc-ktv5t [277.242701ms] +Aug 3 06:36:30.715: INFO: Created: latency-svc-z9c8n +Aug 3 06:36:30.731: INFO: Got endpoints: latency-svc-z9c8n [287.196323ms] +Aug 3 06:36:30.739: INFO: Created: latency-svc-g824s +Aug 3 06:36:30.752: INFO: Got endpoints: latency-svc-g824s [296.972488ms] +Aug 3 06:36:30.760: INFO: Created: latency-svc-hhnnl +Aug 3 06:36:30.770: INFO: Got endpoints: latency-svc-hhnnl [305.918558ms] +Aug 3 06:36:30.774: INFO: Created: latency-svc-x22bb +Aug 3 06:36:30.782: INFO: Got endpoints: latency-svc-x22bb [290.972105ms] +Aug 3 06:36:30.788: INFO: Created: latency-svc-w7x76 +Aug 3 06:36:30.799: INFO: Got endpoints: latency-svc-w7x76 [287.417818ms] +Aug 3 06:36:30.803: INFO: Created: latency-svc-9f5gv +Aug 3 06:36:30.814: INFO: Got endpoints: latency-svc-9f5gv [240.927007ms] +Aug 3 06:36:30.818: INFO: Created: latency-svc-bmrxr +Aug 3 06:36:30.827: INFO: Created: latency-svc-vw6ln +Aug 3 06:36:30.827: INFO: Got endpoints: latency-svc-bmrxr [232.353509ms] +Aug 3 06:36:30.834: INFO: Got endpoints: latency-svc-vw6ln [230.97792ms] +Aug 3 06:36:30.842: INFO: Created: latency-svc-ks2mw +Aug 3 06:36:30.845: INFO: Got endpoints: latency-svc-ks2mw [222.684594ms] +Aug 3 06:36:30.956: INFO: Created: latency-svc-xk7j5 +Aug 3 06:36:30.961: INFO: Created: latency-svc-kddfx +Aug 3 06:36:30.967: INFO: Created: latency-svc-frdpd +Aug 3 06:36:30.969: INFO: Created: latency-svc-crbv9 +Aug 3 06:36:30.969: INFO: Created: latency-svc-qfx2r +Aug 3 06:36:30.970: INFO: Created: latency-svc-df8jm +Aug 3 06:36:30.970: INFO: Created: latency-svc-6vn65 +Aug 3 06:36:30.970: INFO: Created: latency-svc-qhgwx +Aug 3 06:36:30.970: INFO: Created: latency-svc-f8q85 +Aug 3 06:36:30.970: INFO: Created: latency-svc-hmfvf +Aug 3 06:36:30.970: INFO: Created: latency-svc-tshd7 +Aug 3 06:36:30.970: INFO: Created: 
latency-svc-g244w +Aug 3 06:36:30.970: INFO: Created: latency-svc-2wnzx +Aug 3 06:36:30.971: INFO: Created: latency-svc-f8g98 +Aug 3 06:36:30.971: INFO: Created: latency-svc-sbzq8 +Aug 3 06:36:31.034: INFO: Got endpoints: latency-svc-xk7j5 [333.444461ms] +Aug 3 06:36:31.034: INFO: Got endpoints: latency-svc-f8g98 [264.464035ms] +Aug 3 06:36:31.034: INFO: Got endpoints: latency-svc-crbv9 [395.441603ms] +Aug 3 06:36:31.034: INFO: Got endpoints: latency-svc-2wnzx [321.248937ms] +Aug 3 06:36:31.034: INFO: Got endpoints: latency-svc-frdpd [200.276872ms] +Aug 3 06:36:31.042: INFO: Got endpoints: latency-svc-qfx2r [243.358444ms] +Aug 3 06:36:31.045: INFO: Got endpoints: latency-svc-6vn65 [314.51622ms] +Aug 3 06:36:31.054: INFO: Created: latency-svc-d47p7 +Aug 3 06:36:31.068: INFO: Created: latency-svc-4prnm +Aug 3 06:36:31.075: INFO: Created: latency-svc-d928v +Aug 3 06:36:31.090: INFO: Got endpoints: latency-svc-kddfx [308.069947ms] +Aug 3 06:36:31.092: INFO: Created: latency-svc-8g7hb +Aug 3 06:36:31.100: INFO: Created: latency-svc-knvqh +Aug 3 06:36:31.107: INFO: Created: latency-svc-pl6z2 +Aug 3 06:36:31.115: INFO: Created: latency-svc-r4zgn +Aug 3 06:36:31.126: INFO: Created: latency-svc-kjp4s +Aug 3 06:36:31.136: INFO: Got endpoints: latency-svc-df8jm [479.38596ms] +Aug 3 06:36:31.154: INFO: Created: latency-svc-hxrg2 +Aug 3 06:36:31.185: INFO: Got endpoints: latency-svc-f8q85 [370.750915ms] +Aug 3 06:36:31.200: INFO: Created: latency-svc-2rfdx +Aug 3 06:36:31.232: INFO: Got endpoints: latency-svc-qhgwx [405.161458ms] +Aug 3 06:36:31.247: INFO: Created: latency-svc-x7qvt +Aug 3 06:36:31.285: INFO: Got endpoints: latency-svc-hmfvf [439.234818ms] +Aug 3 06:36:31.303: INFO: Created: latency-svc-9gjs9 +Aug 3 06:36:31.332: INFO: Got endpoints: latency-svc-g244w [645.445705ms] +Aug 3 06:36:31.350: INFO: Created: latency-svc-7gwbc +Aug 3 06:36:31.381: INFO: Got endpoints: latency-svc-tshd7 [706.747891ms] +Aug 3 06:36:31.396: INFO: Created: latency-svc-bmvk5 +Aug 3 06:36:31.431: INFO: Got endpoints: latency-svc-sbzq8 [679.037831ms] +Aug 3 06:36:31.464: INFO: Created: latency-svc-zqs9m +Aug 3 06:36:31.487: INFO: Got endpoints: latency-svc-d47p7 [452.726176ms] +Aug 3 06:36:31.513: INFO: Created: latency-svc-pw9ll +Aug 3 06:36:31.537: INFO: Got endpoints: latency-svc-4prnm [503.417751ms] +Aug 3 06:36:31.560: INFO: Created: latency-svc-xpl4s +Aug 3 06:36:31.587: INFO: Got endpoints: latency-svc-d928v [552.326248ms] +Aug 3 06:36:31.616: INFO: Created: latency-svc-w55jh +Aug 3 06:36:31.644: INFO: Got endpoints: latency-svc-8g7hb [609.567692ms] +Aug 3 06:36:31.677: INFO: Created: latency-svc-vbnpb +Aug 3 06:36:31.700: INFO: Got endpoints: latency-svc-knvqh [665.456945ms] +Aug 3 06:36:31.722: INFO: Created: latency-svc-sgcwc +Aug 3 06:36:31.736: INFO: Got endpoints: latency-svc-pl6z2 [693.587628ms] +Aug 3 06:36:31.764: INFO: Created: latency-svc-258vz +Aug 3 06:36:31.792: INFO: Got endpoints: latency-svc-r4zgn [746.770541ms] +Aug 3 06:36:31.835: INFO: Got endpoints: latency-svc-kjp4s [745.225803ms] +Aug 3 06:36:31.835: INFO: Created: latency-svc-c56f9 +Aug 3 06:36:31.854: INFO: Created: latency-svc-vvpbn +Aug 3 06:36:31.888: INFO: Got endpoints: latency-svc-hxrg2 [751.650202ms] +Aug 3 06:36:31.909: INFO: Created: latency-svc-fxn24 +Aug 3 06:36:31.934: INFO: Got endpoints: latency-svc-2rfdx [748.679163ms] +Aug 3 06:36:31.951: INFO: Created: latency-svc-zjn4x +Aug 3 06:36:31.985: INFO: Got endpoints: latency-svc-x7qvt [752.146719ms] +Aug 3 06:36:32.005: INFO: Created: latency-svc-vvrlz +Aug 3 06:36:32.033: 
INFO: Got endpoints: latency-svc-9gjs9 [748.127846ms] +Aug 3 06:36:32.054: INFO: Created: latency-svc-qk7mn +Aug 3 06:36:32.101: INFO: Got endpoints: latency-svc-7gwbc [768.623738ms] +Aug 3 06:36:32.121: INFO: Created: latency-svc-2vtw6 +Aug 3 06:36:32.132: INFO: Got endpoints: latency-svc-bmvk5 [751.073213ms] +Aug 3 06:36:32.155: INFO: Created: latency-svc-4f4dg +Aug 3 06:36:32.185: INFO: Got endpoints: latency-svc-zqs9m [753.287754ms] +Aug 3 06:36:32.207: INFO: Created: latency-svc-xzmw5 +Aug 3 06:36:32.233: INFO: Got endpoints: latency-svc-pw9ll [746.187809ms] +Aug 3 06:36:32.254: INFO: Created: latency-svc-nz9lz +Aug 3 06:36:32.281: INFO: Got endpoints: latency-svc-xpl4s [743.367464ms] +Aug 3 06:36:32.302: INFO: Created: latency-svc-bqmqr +Aug 3 06:36:32.335: INFO: Got endpoints: latency-svc-w55jh [748.072995ms] +Aug 3 06:36:32.350: INFO: Created: latency-svc-7vcpx +Aug 3 06:36:32.385: INFO: Got endpoints: latency-svc-vbnpb [741.197554ms] +Aug 3 06:36:32.403: INFO: Created: latency-svc-8qndl +Aug 3 06:36:32.433: INFO: Got endpoints: latency-svc-sgcwc [733.053035ms] +Aug 3 06:36:32.450: INFO: Created: latency-svc-2v2fl +Aug 3 06:36:32.485: INFO: Got endpoints: latency-svc-258vz [749.512104ms] +Aug 3 06:36:32.547: INFO: Got endpoints: latency-svc-c56f9 [754.900876ms] +Aug 3 06:36:32.557: INFO: Created: latency-svc-p2rqj +Aug 3 06:36:32.588: INFO: Got endpoints: latency-svc-vvpbn [752.95048ms] +Aug 3 06:36:32.589: INFO: Created: latency-svc-dnb2m +Aug 3 06:36:32.607: INFO: Created: latency-svc-djhcz +Aug 3 06:36:32.639: INFO: Got endpoints: latency-svc-fxn24 [750.949024ms] +Aug 3 06:36:32.659: INFO: Created: latency-svc-4xdml +Aug 3 06:36:32.687: INFO: Got endpoints: latency-svc-zjn4x [752.765098ms] +Aug 3 06:36:32.706: INFO: Created: latency-svc-x4mqg +Aug 3 06:36:32.732: INFO: Got endpoints: latency-svc-vvrlz [747.56942ms] +Aug 3 06:36:32.763: INFO: Created: latency-svc-6f486 +Aug 3 06:36:32.789: INFO: Got endpoints: latency-svc-qk7mn [755.662195ms] +Aug 3 06:36:32.813: INFO: Created: latency-svc-5cvvj +Aug 3 06:36:32.835: INFO: Got endpoints: latency-svc-2vtw6 [734.315362ms] +Aug 3 06:36:32.859: INFO: Created: latency-svc-trg9s +Aug 3 06:36:32.884: INFO: Got endpoints: latency-svc-4f4dg [751.805608ms] +Aug 3 06:36:32.907: INFO: Created: latency-svc-t6dzv +Aug 3 06:36:32.933: INFO: Got endpoints: latency-svc-xzmw5 [748.655174ms] +Aug 3 06:36:32.956: INFO: Created: latency-svc-gp7rp +Aug 3 06:36:32.996: INFO: Got endpoints: latency-svc-nz9lz [763.014939ms] +Aug 3 06:36:33.015: INFO: Created: latency-svc-j8sw9 +Aug 3 06:36:33.037: INFO: Got endpoints: latency-svc-bqmqr [756.179867ms] +Aug 3 06:36:33.085: INFO: Got endpoints: latency-svc-7vcpx [750.066051ms] +Aug 3 06:36:33.133: INFO: Got endpoints: latency-svc-8qndl [747.602718ms] +Aug 3 06:36:33.183: INFO: Got endpoints: latency-svc-2v2fl [749.912772ms] +Aug 3 06:36:33.234: INFO: Got endpoints: latency-svc-p2rqj [748.112839ms] +Aug 3 06:36:33.284: INFO: Got endpoints: latency-svc-dnb2m [736.839159ms] +Aug 3 06:36:33.332: INFO: Got endpoints: latency-svc-djhcz [743.759317ms] +Aug 3 06:36:33.383: INFO: Got endpoints: latency-svc-4xdml [743.714669ms] +Aug 3 06:36:33.436: INFO: Got endpoints: latency-svc-x4mqg [748.761231ms] +Aug 3 06:36:33.484: INFO: Got endpoints: latency-svc-6f486 [752.12145ms] +Aug 3 06:36:33.535: INFO: Got endpoints: latency-svc-5cvvj [746.275701ms] +Aug 3 06:36:33.583: INFO: Got endpoints: latency-svc-trg9s [747.5572ms] +Aug 3 06:36:33.640: INFO: Got endpoints: latency-svc-t6dzv [755.531156ms] +Aug 3 06:36:33.684: 
INFO: Got endpoints: latency-svc-gp7rp [750.611442ms] +Aug 3 06:36:33.733: INFO: Got endpoints: latency-svc-j8sw9 [736.51826ms] +Aug 3 06:36:33.733: INFO: Latencies: [35.322038ms 57.881398ms 72.080302ms 111.48463ms 131.264645ms 142.016231ms 144.359365ms 151.128349ms 159.977069ms 166.108428ms 169.546369ms 171.478168ms 171.544851ms 174.314305ms 175.31842ms 179.93086ms 186.211612ms 193.199142ms 193.98306ms 200.276872ms 201.864813ms 206.762594ms 208.80029ms 221.406434ms 222.684594ms 227.376737ms 230.97792ms 232.353509ms 232.928383ms 240.927007ms 243.358444ms 252.860597ms 258.856626ms 264.464035ms 272.632515ms 273.637571ms 277.242701ms 280.998998ms 283.746043ms 287.196323ms 287.417818ms 290.972105ms 291.70002ms 292.39727ms 293.036426ms 293.921719ms 295.779714ms 296.972488ms 297.40271ms 301.742629ms 305.484913ms 305.835168ms 305.918558ms 308.069947ms 311.549751ms 314.51622ms 321.248937ms 333.444461ms 334.369306ms 349.067517ms 355.81278ms 356.113709ms 370.750915ms 387.68148ms 395.441603ms 405.161458ms 408.59383ms 411.122241ms 414.708154ms 417.361036ms 421.035927ms 421.09394ms 422.183004ms 422.63894ms 426.724462ms 428.906255ms 429.587075ms 430.075081ms 438.735501ms 439.234818ms 440.754358ms 442.85618ms 449.745504ms 452.726176ms 455.016838ms 465.311062ms 473.720946ms 478.034555ms 479.38596ms 479.922941ms 487.478402ms 490.57766ms 491.421602ms 499.130998ms 502.38684ms 503.169322ms 503.417751ms 504.145339ms 504.318472ms 505.709577ms 508.90852ms 513.168023ms 513.588571ms 513.933676ms 515.748782ms 515.78174ms 518.962486ms 519.220555ms 528.104827ms 534.469574ms 540.533636ms 547.521247ms 552.326248ms 557.994043ms 566.707465ms 584.108527ms 587.958367ms 605.146034ms 609.567692ms 612.918949ms 612.99474ms 639.322404ms 645.445705ms 665.456945ms 674.22019ms 679.037831ms 693.587628ms 693.920549ms 699.91503ms 706.747891ms 726.991668ms 733.053035ms 734.315362ms 736.51826ms 736.839159ms 737.736812ms 741.021103ms 741.197554ms 741.408372ms 741.718312ms 743.367464ms 743.714669ms 743.759317ms 745.225803ms 746.187809ms 746.275701ms 746.770541ms 747.5572ms 747.56942ms 747.593839ms 747.602718ms 748.072995ms 748.112839ms 748.127846ms 748.655174ms 748.679163ms 748.761231ms 749.379601ms 749.512104ms 749.592627ms 749.912772ms 750.066051ms 750.611442ms 750.949024ms 751.073213ms 751.650202ms 751.805608ms 752.12145ms 752.146719ms 752.765098ms 752.95048ms 753.287754ms 754.900876ms 755.531156ms 755.662195ms 756.179867ms 763.014939ms 768.623738ms 779.380866ms 785.494363ms 830.103089ms 831.962742ms 841.217129ms 878.088932ms 932.465496ms 1.866921154s 1.885846979s 1.889750749s 1.909721118s 2.080522552s 2.125680789s 2.135838964s 2.23048719s 2.281963034s 2.293650689s 2.330736455s 2.373907467s 2.421962286s 2.475667274s 2.52249731s] +Aug 3 06:36:33.733: INFO: 50 %ile: 508.90852ms +Aug 3 06:36:33.733: INFO: 90 %ile: 830.103089ms +Aug 3 06:36:33.733: INFO: 99 %ile: 2.475667274s +Aug 3 06:36:33.733: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:36:33.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-4705" for this suite. 
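Each latency sample above measures the interval between creating a Service and observing a matching Endpoints object, so the percentiles summarize control-plane propagation speed rather than data-path latency. A rough manual probe of the same path, using illustrative names and assuming a pullable nginx image:
```
kubectl create deployment latency-probe --image=nginx
kubectl wait --for=condition=available deployment/latency-probe
kubectl expose deployment latency-probe --port=80
# watch how quickly the endpoints controller fills in pod addresses
kubectl get endpoints latency-probe --watch
```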
+ +• [SLOW TEST:13.937 seconds] +[sig-network] Service endpoints latency +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":346,"completed":60,"skipped":1022,"failed":0} +SSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:36:33.766: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Aug 3 06:36:33.874: INFO: Waiting up to 5m0s for pod "downward-api-04346b25-6cee-4739-a751-2e648b49c1a5" in namespace "downward-api-1180" to be "Succeeded or Failed" +Aug 3 06:36:33.886: INFO: Pod "downward-api-04346b25-6cee-4739-a751-2e648b49c1a5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.001552ms +Aug 3 06:36:35.900: INFO: Pod "downward-api-04346b25-6cee-4739-a751-2e648b49c1a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025673948s +Aug 3 06:36:37.912: INFO: Pod "downward-api-04346b25-6cee-4739-a751-2e648b49c1a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037583687s +STEP: Saw pod success +Aug 3 06:36:37.912: INFO: Pod "downward-api-04346b25-6cee-4739-a751-2e648b49c1a5" satisfied condition "Succeeded or Failed" +Aug 3 06:36:37.917: INFO: Trying to get logs from node dce-10-6-213-50 pod downward-api-04346b25-6cee-4739-a751-2e648b49c1a5 container dapi-container: +STEP: delete the pod +Aug 3 06:36:37.974: INFO: Waiting for pod downward-api-04346b25-6cee-4739-a751-2e648b49c1a5 to disappear +Aug 3 06:36:37.980: INFO: Pod downward-api-04346b25-6cee-4739-a751-2e648b49c1a5 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:36:37.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1180" for this suite. 
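The pod above declares no resource limits, so its downward-API environment variables fall back to the node's allocatable CPU and memory, which is what the test asserts. A minimal sketch of the same mechanism, with illustrative names:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEM_LIMIT=$MEM_LIMIT"]
    env:
    # with no resources.limits set, these resolve to node allocatable values
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEM_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs dapi-defaults-demo   # once the pod has completed
```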
+•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":346,"completed":61,"skipped":1026,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:36:38.003: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's args +Aug 3 06:36:38.091: INFO: Waiting up to 5m0s for pod "var-expansion-f6a09b33-b632-4de7-8890-77245a5d9f6d" in namespace "var-expansion-5326" to be "Succeeded or Failed" +Aug 3 06:36:38.101: INFO: Pod "var-expansion-f6a09b33-b632-4de7-8890-77245a5d9f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020996ms +Aug 3 06:36:40.118: INFO: Pod "var-expansion-f6a09b33-b632-4de7-8890-77245a5d9f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026685519s +Aug 3 06:36:42.135: INFO: Pod "var-expansion-f6a09b33-b632-4de7-8890-77245a5d9f6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044053461s +STEP: Saw pod success +Aug 3 06:36:42.135: INFO: Pod "var-expansion-f6a09b33-b632-4de7-8890-77245a5d9f6d" satisfied condition "Succeeded or Failed" +Aug 3 06:36:42.144: INFO: Trying to get logs from node dce-10-6-213-50 pod var-expansion-f6a09b33-b632-4de7-8890-77245a5d9f6d container dapi-container: +STEP: delete the pod +Aug 3 06:36:42.215: INFO: Waiting for pod var-expansion-f6a09b33-b632-4de7-8890-77245a5d9f6d to disappear +Aug 3 06:36:42.231: INFO: Pod var-expansion-f6a09b33-b632-4de7-8890-77245a5d9f6d no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:36:42.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5326" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":346,"completed":62,"skipped":1033,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:36:42.259: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a ReplicaSet +STEP: Verify that the required pods have come up +Aug 3 06:36:42.374: INFO: Pod name sample-pod: Found 0 pods out of 3 +Aug 3 06:36:47.393: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running +Aug 3 06:36:49.421: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets +STEP: DeleteCollection of the ReplicaSets +STEP: After DeleteCollection verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:36:49.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3643" for this suite. 
+ +• [SLOW TEST:7.226 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":346,"completed":63,"skipped":1105,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:36:49.485: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +Aug 3 06:37:29.854: INFO: The status of Pod dce-kube-controller-manager-dce-10-6-213-30 is Running (Ready = true) +Aug 3 06:38:30.142: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
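The garbage collector is expected to leave the pods alone here because the ReplicationController was deleted with orphan propagation; the long run of Deleting pod lines that follows is the test's own cleanup of those intentionally orphaned pods. A manual sketch of the same behavior, assuming a hypothetical RC manifest and label:
```
kubectl apply -f simpletest-rc.yaml        # hypothetical ReplicationController
kubectl delete rc simpletest-rc --cascade=orphan
# the pods survive the RC; the GC only clears their ownerReferences
kubectl get pods -l name=simpletest
```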
+Aug 3 06:38:30.142: INFO: Deleting pod "simpletest.rc-2gj5h" in namespace "gc-8368" +Aug 3 06:38:30.190: INFO: Deleting pod "simpletest.rc-2p68t" in namespace "gc-8368" +Aug 3 06:38:30.212: INFO: Deleting pod "simpletest.rc-2qk6v" in namespace "gc-8368" +Aug 3 06:38:30.237: INFO: Deleting pod "simpletest.rc-2qwd8" in namespace "gc-8368" +Aug 3 06:38:30.258: INFO: Deleting pod "simpletest.rc-2vtq4" in namespace "gc-8368" +Aug 3 06:38:30.313: INFO: Deleting pod "simpletest.rc-2vvgz" in namespace "gc-8368" +Aug 3 06:38:30.339: INFO: Deleting pod "simpletest.rc-45f7d" in namespace "gc-8368" +Aug 3 06:38:30.375: INFO: Deleting pod "simpletest.rc-49gkd" in namespace "gc-8368" +Aug 3 06:38:30.406: INFO: Deleting pod "simpletest.rc-4dhbv" in namespace "gc-8368" +Aug 3 06:38:30.440: INFO: Deleting pod "simpletest.rc-4dxhn" in namespace "gc-8368" +Aug 3 06:38:30.480: INFO: Deleting pod "simpletest.rc-4krbp" in namespace "gc-8368" +Aug 3 06:38:30.528: INFO: Deleting pod "simpletest.rc-4x2hm" in namespace "gc-8368" +Aug 3 06:38:30.566: INFO: Deleting pod "simpletest.rc-52mwm" in namespace "gc-8368" +Aug 3 06:38:30.594: INFO: Deleting pod "simpletest.rc-5cqgj" in namespace "gc-8368" +Aug 3 06:38:30.640: INFO: Deleting pod "simpletest.rc-5jwf4" in namespace "gc-8368" +Aug 3 06:38:30.672: INFO: Deleting pod "simpletest.rc-5pzxd" in namespace "gc-8368" +Aug 3 06:38:30.736: INFO: Deleting pod "simpletest.rc-5rmgc" in namespace "gc-8368" +Aug 3 06:38:30.773: INFO: Deleting pod "simpletest.rc-66c4h" in namespace "gc-8368" +Aug 3 06:38:30.823: INFO: Deleting pod "simpletest.rc-6rc6j" in namespace "gc-8368" +Aug 3 06:38:30.864: INFO: Deleting pod "simpletest.rc-6x87c" in namespace "gc-8368" +Aug 3 06:38:30.913: INFO: Deleting pod "simpletest.rc-7kmxg" in namespace "gc-8368" +Aug 3 06:38:30.976: INFO: Deleting pod "simpletest.rc-8cr8n" in namespace "gc-8368" +Aug 3 06:38:31.007: INFO: Deleting pod "simpletest.rc-8trwk" in namespace "gc-8368" +Aug 3 06:38:31.068: INFO: Deleting pod "simpletest.rc-98hg4" in namespace "gc-8368" +Aug 3 06:38:31.126: INFO: Deleting pod "simpletest.rc-9kdb9" in namespace "gc-8368" +Aug 3 06:38:31.189: INFO: Deleting pod "simpletest.rc-b6dmd" in namespace "gc-8368" +Aug 3 06:38:31.255: INFO: Deleting pod "simpletest.rc-bcgk6" in namespace "gc-8368" +Aug 3 06:38:31.326: INFO: Deleting pod "simpletest.rc-bm22n" in namespace "gc-8368" +Aug 3 06:38:31.367: INFO: Deleting pod "simpletest.rc-cb8kt" in namespace "gc-8368" +Aug 3 06:38:31.393: INFO: Deleting pod "simpletest.rc-ccqzf" in namespace "gc-8368" +Aug 3 06:38:31.439: INFO: Deleting pod "simpletest.rc-d6gqj" in namespace "gc-8368" +Aug 3 06:38:31.465: INFO: Deleting pod "simpletest.rc-dnz6j" in namespace "gc-8368" +Aug 3 06:38:31.513: INFO: Deleting pod "simpletest.rc-dvqmv" in namespace "gc-8368" +Aug 3 06:38:31.593: INFO: Deleting pod "simpletest.rc-dzhbn" in namespace "gc-8368" +Aug 3 06:38:31.689: INFO: Deleting pod "simpletest.rc-f27nk" in namespace "gc-8368" +Aug 3 06:38:31.781: INFO: Deleting pod "simpletest.rc-f949r" in namespace "gc-8368" +Aug 3 06:38:31.843: INFO: Deleting pod "simpletest.rc-fll8h" in namespace "gc-8368" +Aug 3 06:38:31.898: INFO: Deleting pod "simpletest.rc-fx8b6" in namespace "gc-8368" +Aug 3 06:38:31.958: INFO: Deleting pod "simpletest.rc-fzkqv" in namespace "gc-8368" +Aug 3 06:38:31.989: INFO: Deleting pod "simpletest.rc-gh8rn" in namespace "gc-8368" +Aug 3 06:38:32.049: INFO: Deleting pod "simpletest.rc-ghcr5" in namespace "gc-8368" +Aug 3 06:38:32.130: INFO: Deleting pod "simpletest.rc-gpxqh" in 
namespace "gc-8368" +Aug 3 06:38:32.173: INFO: Deleting pod "simpletest.rc-gspwc" in namespace "gc-8368" +Aug 3 06:38:32.215: INFO: Deleting pod "simpletest.rc-h2fmk" in namespace "gc-8368" +Aug 3 06:38:32.335: INFO: Deleting pod "simpletest.rc-h2jp4" in namespace "gc-8368" +Aug 3 06:38:32.395: INFO: Deleting pod "simpletest.rc-j7cn8" in namespace "gc-8368" +Aug 3 06:38:32.473: INFO: Deleting pod "simpletest.rc-k5rvc" in namespace "gc-8368" +Aug 3 06:38:32.549: INFO: Deleting pod "simpletest.rc-ktfdm" in namespace "gc-8368" +Aug 3 06:38:32.601: INFO: Deleting pod "simpletest.rc-kvmd8" in namespace "gc-8368" +Aug 3 06:38:32.669: INFO: Deleting pod "simpletest.rc-l2n4r" in namespace "gc-8368" +Aug 3 06:38:32.721: INFO: Deleting pod "simpletest.rc-lbnqv" in namespace "gc-8368" +Aug 3 06:38:32.791: INFO: Deleting pod "simpletest.rc-lgx76" in namespace "gc-8368" +Aug 3 06:38:32.835: INFO: Deleting pod "simpletest.rc-m9p47" in namespace "gc-8368" +Aug 3 06:38:32.888: INFO: Deleting pod "simpletest.rc-mbpmj" in namespace "gc-8368" +Aug 3 06:38:32.956: INFO: Deleting pod "simpletest.rc-mgmdj" in namespace "gc-8368" +Aug 3 06:38:33.008: INFO: Deleting pod "simpletest.rc-mknp8" in namespace "gc-8368" +Aug 3 06:38:33.044: INFO: Deleting pod "simpletest.rc-mmmdt" in namespace "gc-8368" +Aug 3 06:38:33.096: INFO: Deleting pod "simpletest.rc-msbcl" in namespace "gc-8368" +Aug 3 06:38:33.147: INFO: Deleting pod "simpletest.rc-mt74c" in namespace "gc-8368" +Aug 3 06:38:33.214: INFO: Deleting pod "simpletest.rc-mx4t5" in namespace "gc-8368" +Aug 3 06:38:33.298: INFO: Deleting pod "simpletest.rc-mx8z7" in namespace "gc-8368" +Aug 3 06:38:33.323: INFO: Deleting pod "simpletest.rc-n5mbv" in namespace "gc-8368" +Aug 3 06:38:33.372: INFO: Deleting pod "simpletest.rc-nqzdv" in namespace "gc-8368" +Aug 3 06:38:33.423: INFO: Deleting pod "simpletest.rc-nw596" in namespace "gc-8368" +Aug 3 06:38:33.452: INFO: Deleting pod "simpletest.rc-nwwbf" in namespace "gc-8368" +Aug 3 06:38:33.500: INFO: Deleting pod "simpletest.rc-ph6xx" in namespace "gc-8368" +Aug 3 06:38:33.553: INFO: Deleting pod "simpletest.rc-psl9z" in namespace "gc-8368" +Aug 3 06:38:33.605: INFO: Deleting pod "simpletest.rc-pv2br" in namespace "gc-8368" +Aug 3 06:38:33.651: INFO: Deleting pod "simpletest.rc-q6bxf" in namespace "gc-8368" +Aug 3 06:38:33.715: INFO: Deleting pod "simpletest.rc-qcckx" in namespace "gc-8368" +Aug 3 06:38:33.755: INFO: Deleting pod "simpletest.rc-qcstk" in namespace "gc-8368" +Aug 3 06:38:33.826: INFO: Deleting pod "simpletest.rc-qh2hg" in namespace "gc-8368" +Aug 3 06:38:33.894: INFO: Deleting pod "simpletest.rc-qzcp6" in namespace "gc-8368" +Aug 3 06:38:33.925: INFO: Deleting pod "simpletest.rc-r8plz" in namespace "gc-8368" +Aug 3 06:38:33.973: INFO: Deleting pod "simpletest.rc-rkp75" in namespace "gc-8368" +Aug 3 06:38:34.022: INFO: Deleting pod "simpletest.rc-rprk6" in namespace "gc-8368" +Aug 3 06:38:34.060: INFO: Deleting pod "simpletest.rc-s8wxv" in namespace "gc-8368" +Aug 3 06:38:34.107: INFO: Deleting pod "simpletest.rc-sgmw7" in namespace "gc-8368" +Aug 3 06:38:34.158: INFO: Deleting pod "simpletest.rc-smhfm" in namespace "gc-8368" +Aug 3 06:38:34.230: INFO: Deleting pod "simpletest.rc-sptm8" in namespace "gc-8368" +Aug 3 06:38:34.292: INFO: Deleting pod "simpletest.rc-t76d8" in namespace "gc-8368" +Aug 3 06:38:34.365: INFO: Deleting pod "simpletest.rc-tfddh" in namespace "gc-8368" +Aug 3 06:38:34.437: INFO: Deleting pod "simpletest.rc-thw69" in namespace "gc-8368" +Aug 3 06:38:34.474: INFO: Deleting pod 
"simpletest.rc-vjjnk" in namespace "gc-8368" +Aug 3 06:38:34.531: INFO: Deleting pod "simpletest.rc-w552q" in namespace "gc-8368" +Aug 3 06:38:34.595: INFO: Deleting pod "simpletest.rc-w877t" in namespace "gc-8368" +Aug 3 06:38:34.691: INFO: Deleting pod "simpletest.rc-wlhw5" in namespace "gc-8368" +Aug 3 06:38:34.767: INFO: Deleting pod "simpletest.rc-wn846" in namespace "gc-8368" +Aug 3 06:38:34.837: INFO: Deleting pod "simpletest.rc-wsznw" in namespace "gc-8368" +Aug 3 06:38:34.896: INFO: Deleting pod "simpletest.rc-wwkfr" in namespace "gc-8368" +Aug 3 06:38:34.949: INFO: Deleting pod "simpletest.rc-x22gn" in namespace "gc-8368" +Aug 3 06:38:34.997: INFO: Deleting pod "simpletest.rc-x2rnl" in namespace "gc-8368" +Aug 3 06:38:35.061: INFO: Deleting pod "simpletest.rc-x9svm" in namespace "gc-8368" +Aug 3 06:38:35.125: INFO: Deleting pod "simpletest.rc-xcgjw" in namespace "gc-8368" +Aug 3 06:38:35.197: INFO: Deleting pod "simpletest.rc-xr5s4" in namespace "gc-8368" +Aug 3 06:38:35.284: INFO: Deleting pod "simpletest.rc-xtjpv" in namespace "gc-8368" +Aug 3 06:38:35.396: INFO: Deleting pod "simpletest.rc-z8bcb" in namespace "gc-8368" +Aug 3 06:38:35.501: INFO: Deleting pod "simpletest.rc-zkmm7" in namespace "gc-8368" +Aug 3 06:38:35.616: INFO: Deleting pod "simpletest.rc-zkrzh" in namespace "gc-8368" +Aug 3 06:38:35.671: INFO: Deleting pod "simpletest.rc-znkh6" in namespace "gc-8368" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:38:35.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-8368" for this suite. + +• [SLOW TEST:106.349 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":64,"skipped":1109,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:38:35.835: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-cd0a3e22-90ec-4184-b953-436c1ff2f2e9 +STEP: Creating the pod +Aug 3 06:38:36.464: INFO: The status of Pod pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:38:38.481: INFO: The status of Pod 
pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:38:40.493: INFO: The status of Pod pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:38:42.508: INFO: The status of Pod pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:38:44.545: INFO: The status of Pod pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:38:46.485: INFO: The status of Pod pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:38:48.504: INFO: The status of Pod pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:38:50.483: INFO: The status of Pod pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:38:52.501: INFO: The status of Pod pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:38:54.483: INFO: The status of Pod pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:38:56.472: INFO: The status of Pod pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:38:58.472: INFO: The status of Pod pod-projected-configmaps-17332143-3562-4282-9c85-c6974e48e8bc is Running (Ready = true) +STEP: Updating configmap projected-configmap-test-upd-cd0a3e22-90ec-4184-b953-436c1ff2f2e9 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:40:18.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8163" for this suite. 
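The wait after the update above reflects how configMap-backed projected volumes work: the kubelet refreshes mounted file content on its periodic sync, so changes propagate eventually rather than instantly. A minimal sketch with illustrative names:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  key: original
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/config/key; echo; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
# once the pod is Running, update the ConfigMap and watch the mounted file follow
kubectl patch configmap demo-config -p '{"data":{"key":"updated"}}'
kubectl logs -f projected-cm-demo
```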
+ +• [SLOW TEST:102.331 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":65,"skipped":1117,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:40:18.167: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 06:40:18.522: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 06:40:20.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 6, 40, 18, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 40, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 6, 40, 18, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 40, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 06:40:23.605: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API +STEP: create a namespace for the webhook +STEP: create a configmap should be unconditionally rejected by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:40:23.736: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready +STEP: Destroying namespace "webhook-6642" for this suite. +STEP: Destroying namespace "webhook-6642-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:5.860 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":346,"completed":66,"skipped":1135,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:40:24.028: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:40:24.116: INFO: The status of Pod busybox-host-aliases6b6d33cb-f163-4d6e-8e31-d8d89a7f0bfa is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:40:26.133: INFO: The status of Pod busybox-host-aliases6b6d33cb-f163-4d6e-8e31-d8d89a7f0bfa is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:40:28.128: INFO: The status of Pod busybox-host-aliases6b6d33cb-f163-4d6e-8e31-d8d89a7f0bfa is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:40:30.128: INFO: The status of Pod busybox-host-aliases6b6d33cb-f163-4d6e-8e31-d8d89a7f0bfa is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:40:30.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-5221" for this suite. 
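+
+The hostAliases check above reduces to a pod spec like this sketch (names and addresses are illustrative); the kubelet merges the listed aliases into the managed /etc/hosts it writes for the container.
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: host-aliases-demo
+spec:
+  restartPolicy: Never
+  hostAliases:
+  - ip: "123.45.67.89"
+    hostnames: ["foo.local", "bar.local"]
+  containers:
+  - name: busybox
+    image: busybox
+    command: ["cat", "/etc/hosts"]
+EOF
+kubectl logs host-aliases-demo   # the aliases should appear in the printed hosts file
+```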
+ +• [SLOW TEST:6.134 seconds] +[sig-node] Kubelet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when scheduling a busybox Pod with hostAliases + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137 + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":67,"skipped":1174,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:40:30.163: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:41:06.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-1100" for this suite. 
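+
+The 'expected status' steps above cycle a container through the restart policies; a sketch of one variant (OnFailure with a failing command, hypothetical names) plus how to read back the fields the test asserts on:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: terminate-demo
+spec:
+  restartPolicy: OnFailure   # the test also covers Always and Never variants
+  containers:
+  - name: c
+    image: busybox
+    command: ["sh", "-c", "exit 1"]
+EOF
+# Phase, Ready condition, State and RestartCount are the fields inspected:
+kubectl get pod terminate-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
+```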
+ +• [SLOW TEST:36.467 seconds] +[sig-node] Container Runtime +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + blackbox test + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 + when starting a container that exits + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":68,"skipped":1239,"failed":0} +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:41:06.631: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 06:41:06.790: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e1700f1-fc1d-4562-85e7-a3b4b9d22dee" in namespace "projected-1556" to be "Succeeded or Failed" +Aug 3 06:41:06.801: INFO: Pod "downwardapi-volume-8e1700f1-fc1d-4562-85e7-a3b4b9d22dee": Phase="Pending", Reason="", readiness=false. Elapsed: 10.698273ms +Aug 3 06:41:08.816: INFO: Pod "downwardapi-volume-8e1700f1-fc1d-4562-85e7-a3b4b9d22dee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025523061s +Aug 3 06:41:10.830: INFO: Pod "downwardapi-volume-8e1700f1-fc1d-4562-85e7-a3b4b9d22dee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040380926s +STEP: Saw pod success +Aug 3 06:41:10.831: INFO: Pod "downwardapi-volume-8e1700f1-fc1d-4562-85e7-a3b4b9d22dee" satisfied condition "Succeeded or Failed" +Aug 3 06:41:10.837: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-8e1700f1-fc1d-4562-85e7-a3b4b9d22dee container client-container: +STEP: delete the pod +Aug 3 06:41:10.900: INFO: Waiting for pod downwardapi-volume-8e1700f1-fc1d-4562-85e7-a3b4b9d22dee to disappear +Aug 3 06:41:10.906: INFO: Pod downwardapi-volume-8e1700f1-fc1d-4562-85e7-a3b4b9d22dee no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:41:10.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1556" for this suite. 
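+
+The downward-API check above mounts pod metadata as a file with an explicit per-item mode; a reduced sketch with hypothetical names:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-mode-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["sh", "-c", "ls -l /etc/podinfo/podname && cat /etc/podinfo/podname"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: podname
+        fieldRef:
+          fieldPath: metadata.name
+        mode: 0400   # the per-item mode verified on the mounted file
+EOF
+```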
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":69,"skipped":1239,"failed":0} +SS +------------------------------ +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:41:10.925: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod with failed condition +STEP: updating the pod +Aug 3 06:43:11.571: INFO: Successfully updated pod "var-expansion-50d19385-a831-452e-8d09-0355e3776532" +STEP: waiting for pod running +STEP: deleting the pod gracefully +Aug 3 06:43:13.597: INFO: Deleting pod "var-expansion-50d19385-a831-452e-8d09-0355e3776532" in namespace "var-expansion-4857" +Aug 3 06:43:13.622: INFO: Wait up to 5m0s for pod "var-expansion-50d19385-a831-452e-8d09-0355e3776532" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:43:47.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4857" for this suite. 
+ +• [SLOW TEST:156.740 seconds] +[sig-node] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":70,"skipped":1241,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:43:47.665: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-0f607150-6ab5-4204-834a-91803579b8e5 +STEP: Creating a pod to test consume secrets +Aug 3 06:43:47.743: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-729db4f9-8dca-4e78-a295-b2b11cc3c1f6" in namespace "projected-5634" to be "Succeeded or Failed" +Aug 3 06:43:47.758: INFO: Pod "pod-projected-secrets-729db4f9-8dca-4e78-a295-b2b11cc3c1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.72772ms +Aug 3 06:43:49.773: INFO: Pod "pod-projected-secrets-729db4f9-8dca-4e78-a295-b2b11cc3c1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030353875s +Aug 3 06:43:51.789: INFO: Pod "pod-projected-secrets-729db4f9-8dca-4e78-a295-b2b11cc3c1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046558082s +Aug 3 06:43:53.804: INFO: Pod "pod-projected-secrets-729db4f9-8dca-4e78-a295-b2b11cc3c1f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061492677s +STEP: Saw pod success +Aug 3 06:43:53.805: INFO: Pod "pod-projected-secrets-729db4f9-8dca-4e78-a295-b2b11cc3c1f6" satisfied condition "Succeeded or Failed" +Aug 3 06:43:53.810: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-secrets-729db4f9-8dca-4e78-a295-b2b11cc3c1f6 container projected-secret-volume-test: +STEP: delete the pod +Aug 3 06:43:53.859: INFO: Waiting for pod pod-projected-secrets-729db4f9-8dca-4e78-a295-b2b11cc3c1f6 to disappear +Aug 3 06:43:53.866: INFO: Pod pod-projected-secrets-729db4f9-8dca-4e78-a295-b2b11cc3c1f6 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:43:53.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5634" for this suite. 
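+
+The 'mappings and Item Mode' wording above refers to the items list of a projected secret source, which remaps a secret key to a chosen file name with an explicit mode. A sketch with hypothetical names:
+```
+kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-secret-demo-pod
+spec:
+  restartPolicy: Never
+  containers:
+  - name: c
+    image: busybox
+    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
+    volumeMounts:
+    - name: secret-vol
+      mountPath: /etc/projected
+  volumes:
+  - name: secret-vol
+    projected:
+      sources:
+      - secret:
+          name: projected-secret-demo
+          items:
+          - key: data-1             # secret key ...
+            path: new-path-data-1   # ... remapped to this file name
+            mode: 0400              # per-item mode the test verifies
+EOF
+```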
+ +• [SLOW TEST:6.221 seconds] +[sig-storage] Projected secret +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":71,"skipped":1258,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:43:53.887: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override arguments +Aug 3 06:43:53.970: INFO: Waiting up to 5m0s for pod "client-containers-e46ea22f-f207-465c-94bf-2d4df95855d4" in namespace "containers-632" to be "Succeeded or Failed" +Aug 3 06:43:53.979: INFO: Pod "client-containers-e46ea22f-f207-465c-94bf-2d4df95855d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.994705ms +Aug 3 06:43:55.998: INFO: Pod "client-containers-e46ea22f-f207-465c-94bf-2d4df95855d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027685497s +Aug 3 06:43:58.005: INFO: Pod "client-containers-e46ea22f-f207-465c-94bf-2d4df95855d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034751166s +STEP: Saw pod success +Aug 3 06:43:58.005: INFO: Pod "client-containers-e46ea22f-f207-465c-94bf-2d4df95855d4" satisfied condition "Succeeded or Failed" +Aug 3 06:43:58.010: INFO: Trying to get logs from node dce-10-6-213-50 pod client-containers-e46ea22f-f207-465c-94bf-2d4df95855d4 container agnhost-container: +STEP: delete the pod +Aug 3 06:43:58.053: INFO: Waiting for pod client-containers-e46ea22f-f207-465c-94bf-2d4df95855d4 to disappear +Aug 3 06:43:58.062: INFO: Pod client-containers-e46ea22f-f207-465c-94bf-2d4df95855d4 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:43:58.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-632" for this suite. 
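+
+In pod terms, overriding 'the image's default arguments (docker cmd)' means setting args, which replaces the image's CMD while leaving its ENTRYPOINT intact (command would replace the ENTRYPOINT instead). A sketch:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: override-args-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: c
+    image: busybox
+    args: ["echo", "overridden arguments"]   # replaces the image CMD
+EOF
+kubectl logs override-args-demo
+```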
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":346,"completed":72,"skipped":1266,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:43:58.084: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod test-webserver-dcfcf14a-82e4-4330-acfe-4e2c4eb57f3d in namespace container-probe-1419 +Aug 3 06:44:02.197: INFO: Started pod test-webserver-dcfcf14a-82e4-4330-acfe-4e2c4eb57f3d in namespace container-probe-1419 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 3 06:44:02.203: INFO: Initial restart count of pod test-webserver-dcfcf14a-82e4-4330-acfe-4e2c4eb57f3d is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:03.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1419" for this suite. 
+ +• [SLOW TEST:245.864 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":73,"skipped":1277,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:03.948: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +Aug 3 06:48:04.048: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:48:06.061: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:48:08.064: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:48:10.055: INFO: The status of Pod test-pod is Running (Ready = true) +STEP: Creating hostNetwork=true pod +Aug 3 06:48:10.085: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:48:12.098: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:48:14.106: INFO: The status of Pod test-host-network-pod is Running (Ready = true) +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Aug 3 06:48:14.114: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1878 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:48:14.114: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:48:14.115: INFO: ExecWithOptions: Clientset creation +Aug 3 06:48:14.115: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-1878/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:48:14.299: INFO: Exec stderr: "" +Aug 3 06:48:14.299: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1878 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:48:14.299: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:48:14.300: INFO: ExecWithOptions: Clientset creation 
+Aug 3 06:48:14.300: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-1878/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:48:14.451: INFO: Exec stderr: "" +Aug 3 06:48:14.451: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1878 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:48:14.451: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:48:14.452: INFO: ExecWithOptions: Clientset creation +Aug 3 06:48:14.452: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-1878/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:48:14.629: INFO: Exec stderr: "" +Aug 3 06:48:14.629: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1878 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:48:14.629: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:48:14.630: INFO: ExecWithOptions: Clientset creation +Aug 3 06:48:14.630: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-1878/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:48:14.867: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Aug 3 06:48:14.867: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1878 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:48:14.867: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:48:14.868: INFO: ExecWithOptions: Clientset creation +Aug 3 06:48:14.868: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-1878/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:48:15.039: INFO: Exec stderr: "" +Aug 3 06:48:15.039: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1878 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:48:15.039: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:48:15.040: INFO: ExecWithOptions: Clientset creation +Aug 3 06:48:15.040: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-1878/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:48:15.227: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Aug 3 06:48:15.227: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1878 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:48:15.227: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:48:15.228: INFO: 
ExecWithOptions: Clientset creation +Aug 3 06:48:15.228: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-1878/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:48:15.490: INFO: Exec stderr: "" +Aug 3 06:48:15.490: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1878 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:48:15.491: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:48:15.492: INFO: ExecWithOptions: Clientset creation +Aug 3 06:48:15.492: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-1878/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:48:15.670: INFO: Exec stderr: "" +Aug 3 06:48:15.670: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1878 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:48:15.670: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:48:15.671: INFO: ExecWithOptions: Clientset creation +Aug 3 06:48:15.671: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-1878/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:48:15.832: INFO: Exec stderr: "" +Aug 3 06:48:15.833: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1878 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:48:15.833: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:48:15.834: INFO: ExecWithOptions: Clientset creation +Aug 3 06:48:15.834: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-1878/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:48:15.986: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:15.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-1878" for this suite. 
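+
+What the exec calls above compare: with hostNetwork: false the kubelet writes a managed /etc/hosts into each container, while hostNetwork: true pods and containers that mount their own /etc/hosts see an unmanaged file. A quick manual check on any running pod (the placeholders are hypothetical):
+```
+# A kubelet-managed /etc/hosts starts with a marker comment, typically:
+#   # Kubernetes-managed hosts file.
+kubectl exec <pod-name> -c <container-name> -- head -3 /etc/hosts
+```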
+ +• [SLOW TEST:12.064 seconds] +[sig-node] KubeletManagedEtcHosts +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":74,"skipped":1289,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:16.013: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:48:16.108: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-5aee159b-3d36-46c3-b853-1d5b4d88c471" in namespace "security-context-test-4465" to be "Succeeded or Failed" +Aug 3 06:48:16.124: INFO: Pod "alpine-nnp-false-5aee159b-3d36-46c3-b853-1d5b4d88c471": Phase="Pending", Reason="", readiness=false. Elapsed: 15.670596ms +Aug 3 06:48:18.142: INFO: Pod "alpine-nnp-false-5aee159b-3d36-46c3-b853-1d5b4d88c471": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033879814s +Aug 3 06:48:20.152: INFO: Pod "alpine-nnp-false-5aee159b-3d36-46c3-b853-1d5b4d88c471": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043835545s +Aug 3 06:48:20.152: INFO: Pod "alpine-nnp-false-5aee159b-3d36-46c3-b853-1d5b4d88c471" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:20.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-4465" for this suite. 
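+
+The security-context check above comes down to one field: with allowPrivilegeEscalation: false the container starts with the no_new_privs flag set, so setuid binaries cannot gain privileges. A sketch with hypothetical names:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nnp-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: c
+    image: alpine
+    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
+    securityContext:
+      runAsUser: 1000
+      allowPrivilegeEscalation: false
+EOF
+kubectl logs nnp-demo   # expect NoNewPrivs: 1
+```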
+•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":75,"skipped":1318,"failed":0} + +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:20.212: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:48:20.264: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:26.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-5575" for this suite. + +• [SLOW TEST:6.720 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":346,"completed":76,"skipped":1318,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:26.932: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be 
processed +STEP: Updating PodDisruptionBudget status +STEP: Waiting for all pods to be running +Aug 3 06:48:27.095: INFO: running pods: 0 < 1 +Aug 3 06:48:29.105: INFO: running pods: 0 < 1 +STEP: locating a running pod +STEP: Waiting for the pdb to be processed +STEP: Patching PodDisruptionBudget status +STEP: Waiting for the pdb to be processed +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:31.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-5010" for this suite. +•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":77,"skipped":1332,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:31.203: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 06:48:32.007: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 06:48:34.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 6, 48, 31, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 48, 31, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 6, 48, 32, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 48, 31, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 06:48:37.082: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod that should be denied by the webhook +STEP: create a pod that causes the webhook to hang +STEP: create a configmap that should be denied by the webhook +STEP: create a 
configmap that should be admitted by the webhook +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: create a namespace that bypass the webhook +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:47.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3047" for this suite. +STEP: Destroying namespace "webhook-3047-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:16.279 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":78,"skipped":1375,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:47.482: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-d639fb7b-e6f4-4f57-b298-c75c810d795b +STEP: Creating secret with name s-test-opt-upd-d24e591e-d84c-40d0-9c1f-76c19d7190e5 +STEP: Creating the pod +Aug 3 06:48:47.637: INFO: The status of Pod pod-projected-secrets-935f24f4-5b97-4512-a217-7a3f45653c66 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:48:49.652: INFO: The status of Pod pod-projected-secrets-935f24f4-5b97-4512-a217-7a3f45653c66 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:48:51.664: INFO: The status of Pod pod-projected-secrets-935f24f4-5b97-4512-a217-7a3f45653c66 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-d639fb7b-e6f4-4f57-b298-c75c810d795b +STEP: Updating secret s-test-opt-upd-d24e591e-d84c-40d0-9c1f-76c19d7190e5 +STEP: Creating secret with name s-test-opt-create-ae51a366-41a9-4517-b649-13697b72d866 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:53.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4649" for this suite. + +• [SLOW TEST:6.445 seconds] +[sig-storage] Projected secret +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":79,"skipped":1383,"failed":0} +SSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:53.927: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating an Endpoint +STEP: waiting for available Endpoint +STEP: listing all Endpoints +STEP: updating the Endpoint +STEP: fetching the Endpoint +STEP: patching the Endpoint +STEP: fetching the Endpoint +STEP: deleting the Endpoint by Collection +STEP: waiting for Endpoint deletion +STEP: fetching the Endpoint +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:54.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4846" for this suite. 
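+
+The Endpoint lifecycle steps above map onto plain API verbs; roughly, with hypothetical names:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Endpoints
+metadata:
+  name: demo-endpoint
+  labels:
+    test: demo
+subsets:
+- addresses:
+  - ip: 10.0.0.10
+  ports:
+  - port: 80
+EOF
+kubectl get endpoints -l test=demo                  # list
+kubectl patch endpoints demo-endpoint --type merge \
+  -p '{"subsets":[{"addresses":[{"ip":"10.0.0.11"}],"ports":[{"port":80}]}]}'   # update/patch
+kubectl delete endpoints -l test=demo               # delete by collection
+```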
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":80,"skipped":1387,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:54.106: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Namespace +STEP: patching the Namespace +STEP: get the Namespace and ensuring it has the label +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:54.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-8396" for this suite. +STEP: Destroying namespace "nspatchtest-7c9794b9-0478-40dc-a5cc-007ac2d6bd70-7463" for this suite. +•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":346,"completed":81,"skipped":1404,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:54.271: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename lease-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:54.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "lease-test-6748" for this suite. 
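+
+The Lease API exercised above lives in coordination.k8s.io/v1; since every node maintains a heartbeat Lease, a read-only look is easy:
+```
+kubectl get leases.coordination.k8s.io -n kube-node-lease
+# Inspect one entry; substitute a real node name from the listing:
+kubectl get lease <node-name> -n kube-node-lease -o yaml
+```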
+•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":346,"completed":82,"skipped":1430,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:54.502: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/apiextensions.k8s.io discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:54.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-3848" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":346,"completed":83,"skipped":1438,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:54.693: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Aug 3 06:48:54.930: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8235 5e1a1fb7-a25b-4985-a081-d7ac7906bf1f 611537 0 2022-08-03 06:48:54 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 06:48:54.930: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8235 5e1a1fb7-a25b-4985-a081-d7ac7906bf1f 611538 0 2022-08-03 06:48:54 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:54.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-8235" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":346,"completed":84,"skipped":1454,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:54.964: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:55.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-3232" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":85,"skipped":1552,"failed":0} +SSSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:55.136: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Pods Set QOS Class + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:149 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:55.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8640" for this suite. 
+•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":86,"skipped":1556,"failed":0} +SSSSS +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:55.324: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name secret-emptykey-test-c85112e3-014e-4018-82a9-6e6010a1a017 +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:48:55.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-475" for this suite. +•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":87,"skipped":1561,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:48:55.441: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +Aug 3 06:49:05.710: INFO: The status of Pod dce-kube-controller-manager-dce-10-6-213-30 is Running (Ready = true) +Aug 3 06:50:06.127: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:50:06.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-6545" for this suite. 
+ +• [SLOW TEST:70.728 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":346,"completed":88,"skipped":1566,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:50:06.170: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Aug 3 06:50:06.261: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:50:12.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-8360" for this suite. 
+ +• [SLOW TEST:6.225 seconds] +[sig-node] InitContainer [NodeConformance] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":89,"skipped":1578,"failed":0} +SSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:50:12.395: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:50:12.487: INFO: The status of Pod pod-secrets-56be57ad-ba49-4f4e-9c25-e28113d13a71 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:50:14.503: INFO: The status of Pod pod-secrets-56be57ad-ba49-4f4e-9c25-e28113d13a71 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:50:16.500: INFO: The status of Pod pod-secrets-56be57ad-ba49-4f4e-9c25-e28113d13a71 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:50:18.502: INFO: The status of Pod pod-secrets-56be57ad-ba49-4f4e-9c25-e28113d13a71 is Running (Ready = true) +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:50:18.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-7662" for this suite. 
+ +• [SLOW TEST:6.192 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":90,"skipped":1583,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:50:18.588: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename tables +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:50:18.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-708" for this suite. +•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":346,"completed":91,"skipped":1608,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:50:18.726: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Aug 3 06:50:18.964: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:18.964: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:18.964: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:18.969: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:50:18.969: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 06:50:19.998: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:19.998: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:19.998: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:20.014: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:50:20.014: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 06:50:20.985: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:20.985: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:20.985: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:20.992: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:50:20.992: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 06:50:21.997: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:21.998: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:21.998: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:22.004: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:50:22.004: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 06:50:22.992: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:22.992: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:22.992: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node +Aug 3 06:50:22.999: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 06:50:22.999: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 06:50:23.991: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:23.991: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:23.991: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:50:24.002: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 3 06:50:24.002: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Getting /status +Aug 3 06:50:24.022: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status +Aug 3 06:50:24.043: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated +Aug 3 06:50:24.047: INFO: Observed &DaemonSet event: ADDED +Aug 3 06:50:24.048: INFO: Observed &DaemonSet event: MODIFIED +Aug 3 06:50:24.048: INFO: Observed &DaemonSet event: MODIFIED +Aug 3 06:50:24.048: INFO: Observed &DaemonSet event: MODIFIED +Aug 3 06:50:24.048: INFO: Found daemon set daemon-set in namespace daemonsets-2357 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Aug 3 06:50:24.048: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status +STEP: watching for the daemon set status to be patched +Aug 3 06:50:24.067: INFO: Observed &DaemonSet event: ADDED +Aug 3 06:50:24.067: INFO: Observed &DaemonSet event: MODIFIED +Aug 3 06:50:24.067: INFO: Observed &DaemonSet event: MODIFIED +Aug 3 06:50:24.068: INFO: Observed &DaemonSet event: MODIFIED +Aug 3 06:50:24.068: INFO: Observed daemon set daemon-set in namespace daemonsets-2357 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Aug 3 06:50:24.068: INFO: Observed &DaemonSet event: MODIFIED +Aug 3 06:50:24.068: INFO: Found daemon set daemon-set in namespace daemonsets-2357 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Aug 3 06:50:24.068: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2357, will wait for the garbage collector to delete the pods +Aug 3 06:50:24.165: INFO: Deleting DaemonSet.extensions daemon-set took: 18.354819ms +Aug 3 06:50:24.266: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.322208ms +Aug 3 06:50:28.588: INFO: Number of 
nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:50:28.589: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 3 06:50:28.597: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"612204"},"items":null} + +Aug 3 06:50:28.604: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"612204"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:50:28.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-2357" for this suite. + +• [SLOW TEST:9.918 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":346,"completed":92,"skipped":1618,"failed":0} +S +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:50:28.645: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 06:50:29.808: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 06:50:31.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 06:50:33.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 06:50:35.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 50, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 06:50:38.904: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:50:39.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4233" for this suite. +STEP: Destroying namespace "webhook-4233-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:10.524 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":93,"skipped":1619,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:50:39.170: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Aug 3 06:50:39.425: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +Aug 3 06:50:41.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 6, 50, 39, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 50, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 6, 50, 39, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 50, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-bb9577b7b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 06:50:43.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 6, 50, 39, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 50, 39, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 6, 50, 39, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 6, 50, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-bb9577b7b\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 06:50:46.484: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:50:46.494: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Creating a v1 custom resource +STEP: v2 custom resource should be converted +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:50:49.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-5255" for this suite. +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 + +• [SLOW TEST:10.657 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":94,"skipped":1655,"failed":0} +SSS +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:50:49.828: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test externalName service +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7888.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7888.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local; sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes 
+STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 3 06:50:55.966: INFO: DNS probes using dns-test-0a80703d-a22e-45a0-a989-fe08801471f9 succeeded + +STEP: deleting the pod +STEP: changing the externalName to bar.example.com +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7888.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7888.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local; sleep 1; done + +STEP: creating a second pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 3 06:51:02.052: INFO: File wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local from pod dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 3 06:51:02.060: INFO: File jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local from pod dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 3 06:51:02.060: INFO: Lookups using dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 failed for: [wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local] + +Aug 3 06:51:07.068: INFO: File wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local from pod dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 3 06:51:07.075: INFO: File jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local from pod dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 contains '' instead of 'bar.example.com.' +Aug 3 06:51:07.075: INFO: Lookups using dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 failed for: [wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local] + +Aug 3 06:51:12.068: INFO: File wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local from pod dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 3 06:51:12.073: INFO: File jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local from pod dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 3 06:51:12.073: INFO: Lookups using dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 failed for: [wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local] + +Aug 3 06:51:17.066: INFO: File wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local from pod dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 3 06:51:17.071: INFO: File jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local from pod dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Aug 3 06:51:17.071: INFO: Lookups using dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 failed for: [wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local] + +Aug 3 06:51:22.069: INFO: File wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local from pod dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 3 06:51:22.075: INFO: File jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local from pod dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 3 06:51:22.075: INFO: Lookups using dns-7888/dns-test-d904b99f-849f-45ef-9db9-76de22b43920 failed for: [wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local] + +Aug 3 06:51:27.078: INFO: DNS probes using dns-test-d904b99f-849f-45ef-9db9-76de22b43920 succeeded + +STEP: deleting the pod +STEP: changing the service to type=ClusterIP +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7888.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7888.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7888.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7888.svc.cluster.local; sleep 1; done + +STEP: creating a third pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 3 06:51:31.211: INFO: DNS probes using dns-test-ce238156-17d2-42d7-ab40-572ac39e5dca succeeded + +STEP: deleting the pod +STEP: deleting the test externalName service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:51:31.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-7888" for this suite. 
+ +• [SLOW TEST:41.446 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":95,"skipped":1658,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:51:31.275: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test env composition +Aug 3 06:51:31.340: INFO: Waiting up to 5m0s for pod "var-expansion-fb9a24f5-4b7f-4264-8f87-b32872e4a019" in namespace "var-expansion-1273" to be "Succeeded or Failed" +Aug 3 06:51:31.349: INFO: Pod "var-expansion-fb9a24f5-4b7f-4264-8f87-b32872e4a019": Phase="Pending", Reason="", readiness=false. Elapsed: 9.41912ms +Aug 3 06:51:33.360: INFO: Pod "var-expansion-fb9a24f5-4b7f-4264-8f87-b32872e4a019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020129604s +Aug 3 06:51:35.372: INFO: Pod "var-expansion-fb9a24f5-4b7f-4264-8f87-b32872e4a019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032232918s +STEP: Saw pod success +Aug 3 06:51:35.372: INFO: Pod "var-expansion-fb9a24f5-4b7f-4264-8f87-b32872e4a019" satisfied condition "Succeeded or Failed" +Aug 3 06:51:35.382: INFO: Trying to get logs from node dce-10-6-213-50 pod var-expansion-fb9a24f5-4b7f-4264-8f87-b32872e4a019 container dapi-container: +STEP: delete the pod +Aug 3 06:51:35.442: INFO: Waiting for pod var-expansion-fb9a24f5-4b7f-4264-8f87-b32872e4a019 to disappear +Aug 3 06:51:35.447: INFO: Pod var-expansion-fb9a24f5-4b7f-4264-8f87-b32872e4a019 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:51:35.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-1273" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":96,"skipped":1698,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:51:35.474: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 06:51:35.600: INFO: created pod +Aug 3 06:51:35.600: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-543" to be "Succeeded or Failed" +Aug 3 06:51:35.617: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 16.513174ms +Aug 3 06:51:37.638: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038037213s +Aug 3 06:51:39.644: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04422239s +Aug 3 06:51:41.658: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057800546s +STEP: Saw pod success +Aug 3 06:51:41.658: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Aug 3 06:52:11.658: INFO: polling logs +Aug 3 06:52:11.690: INFO: Pod logs: +2022/08/03 06:51:39 OK: Got token +2022/08/03 06:51:39 validating with in-cluster discovery +2022/08/03 06:51:39 OK: got issuer https://kubernetes.default.svc +2022/08/03 06:51:39 Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-543:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1659510095, NotBefore:1659509495, IssuedAt:1659509495, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-543", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"23f9b784-8875-478f-81dd-301403e26332"}}} +2022/08/03 06:51:39 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc +2022/08/03 06:51:39 OK: Validated signature on JWT +2022/08/03 06:51:39 OK: Got valid claims from token! 
+2022/08/03 06:51:39 Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-543:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1659510095, NotBefore:1659509495, IssuedAt:1659509495, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-543", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"23f9b784-8875-478f-81dd-301403e26332"}}} + +Aug 3 06:52:11.690: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:52:11.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-543" for this suite. + +• [SLOW TEST:36.263 seconds] +[sig-auth] ServiceAccounts +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":346,"completed":97,"skipped":1730,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:52:11.737: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Aug 3 06:52:11.912: INFO: The status of Pod annotationupdatebc94a4b3-0669-41c2-b8f2-771aa6a1e99a is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:52:13.923: INFO: The status of Pod annotationupdatebc94a4b3-0669-41c2-b8f2-771aa6a1e99a is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:52:15.925: INFO: The status of Pod annotationupdatebc94a4b3-0669-41c2-b8f2-771aa6a1e99a is Running (Ready = true) +Aug 3 06:52:16.465: INFO: Successfully updated pod "annotationupdatebc94a4b3-0669-41c2-b8f2-771aa6a1e99a" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:52:20.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2632" for this suite. 
+ +• [SLOW TEST:8.795 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":98,"skipped":1753,"failed":0} +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:52:20.532: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 06:52:20.607: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d814e54-20ec-403a-a340-ffc5f4054606" in namespace "projected-6533" to be "Succeeded or Failed" +Aug 3 06:52:20.617: INFO: Pod "downwardapi-volume-0d814e54-20ec-403a-a340-ffc5f4054606": Phase="Pending", Reason="", readiness=false. Elapsed: 10.529663ms +Aug 3 06:52:22.629: INFO: Pod "downwardapi-volume-0d814e54-20ec-403a-a340-ffc5f4054606": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02189395s +Aug 3 06:52:24.640: INFO: Pod "downwardapi-volume-0d814e54-20ec-403a-a340-ffc5f4054606": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033231566s +STEP: Saw pod success +Aug 3 06:52:24.640: INFO: Pod "downwardapi-volume-0d814e54-20ec-403a-a340-ffc5f4054606" satisfied condition "Succeeded or Failed" +Aug 3 06:52:24.645: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-0d814e54-20ec-403a-a340-ffc5f4054606 container client-container: +STEP: delete the pod +Aug 3 06:52:24.675: INFO: Waiting for pod downwardapi-volume-0d814e54-20ec-403a-a340-ffc5f4054606 to disappear +Aug 3 06:52:24.686: INFO: Pod downwardapi-volume-0d814e54-20ec-403a-a340-ffc5f4054606 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:52:24.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6533" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":99,"skipped":1753,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:52:24.711: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating Pod +STEP: Reading file content from the nginx-container +Aug 3 06:52:30.820: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1702 PodName:pod-sharedvolume-e9b7701e-ae18-4cbb-9114-8b8c3fdce83d ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 06:52:30.821: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 06:52:30.821: INFO: ExecWithOptions: Clientset creation +Aug 3 06:52:30.821: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/emptydir-1702/pods/pod-sharedvolume-e9b7701e-ae18-4cbb-9114-8b8c3fdce83d/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true %!s(MISSING)) +Aug 3 06:52:30.981: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:52:30.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1702" for this suite. 
+ +• [SLOW TEST:6.290 seconds] +[sig-storage] EmptyDir volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":100,"skipped":1769,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:52:31.002: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-95454392-a1fc-43f1-9410-572c23528d3f +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:52:37.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9023" for this suite. 
+ +• [SLOW TEST:6.184 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":101,"skipped":1814,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:52:37.187: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-7365 +Aug 3 06:52:37.254: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:52:39.268: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Aug 3 06:52:39.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7365 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Aug 3 06:52:39.823: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Aug 3 06:52:39.823: INFO: stdout: "iptables" +Aug 3 06:52:39.823: INFO: proxyMode: iptables +Aug 3 06:52:39.850: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Aug 3 06:52:39.856: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-nodeport-timeout in namespace services-7365 +STEP: creating replication controller affinity-nodeport-timeout in namespace services-7365 +I0803 06:52:39.896596 21 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-7365, replica count: 3 +I0803 06:52:42.952634 21 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 06:52:45.953896 21 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 06:52:45.990: INFO: Creating new exec pod +Aug 3 06:52:51.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7365 exec execpod-affinityn8kgs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' +Aug 3 06:52:51.312: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-timeout 80\n+ echo hostName\nConnection 
to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" +Aug 3 06:52:51.312: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 06:52:51.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7365 exec execpod-affinityn8kgs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.23.224 80' +Aug 3 06:52:51.591: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.23.224 80\nConnection to 172.31.23.224 80 port [tcp/http] succeeded!\n" +Aug 3 06:52:51.591: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 06:52:51.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7365 exec execpod-affinityn8kgs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.6.213.40 30094' +Aug 3 06:52:51.843: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.6.213.40 30094\nConnection to 10.6.213.40 30094 port [tcp/*] succeeded!\n" +Aug 3 06:52:51.843: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 06:52:51.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7365 exec execpod-affinityn8kgs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.6.213.50 30094' +Aug 3 06:52:52.086: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.6.213.50 30094\nConnection to 10.6.213.50 30094 port [tcp/*] succeeded!\n" +Aug 3 06:52:52.086: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 06:52:52.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7365 exec execpod-affinityn8kgs -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.6.213.40:30094/ ; done' +Aug 3 06:52:52.508: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n" +Aug 3 06:52:52.508: INFO: stdout: 
"\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd\naffinity-nodeport-timeout-x5tfd" +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Received response from host: affinity-nodeport-timeout-x5tfd +Aug 3 06:52:52.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7365 exec execpod-affinityn8kgs -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.6.213.40:30094/' +Aug 3 06:52:52.842: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n" +Aug 3 06:52:52.842: INFO: stdout: "affinity-nodeport-timeout-x5tfd" +Aug 3 06:53:12.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7365 exec execpod-affinityn8kgs -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.6.213.40:30094/' +Aug 3 06:53:13.107: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.6.213.40:30094/\n" +Aug 3 06:53:13.107: INFO: stdout: "affinity-nodeport-timeout-4h642" +Aug 3 06:53:13.107: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7365, will wait for the garbage collector to delete the pods +Aug 3 06:53:13.198: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 12.365942ms +Aug 3 06:53:13.299: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 101.089459ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:53:17.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7365" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:40.580 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":102,"skipped":1836,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:53:17.768: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +STEP: reading a file in the container +Aug 3 06:53:22.395: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7386 pod-service-account-824ec295-9a33-4b69-ba56-e16c5cc0bd13 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Aug 3 06:53:22.704: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7386 pod-service-account-824ec295-9a33-4b69-ba56-e16c5cc0bd13 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Aug 3 06:53:23.020: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7386 pod-service-account-824ec295-9a33-4b69-ba56-e16c5cc0bd13 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:53:23.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-7386" for this suite. 
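The ServiceAccounts check above simply reads back the three files that the service account machinery projects into every pod. The same files can be inspected by hand in any pod; the pod name and busybox image here are placeholders:
```
kubectl run token-check --image=busybox:1.36 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/token-check
# The three files the conformance test reads:
kubectl exec token-check -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
kubectl exec token-check -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec token-check -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
```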
+ +• [SLOW TEST:5.612 seconds] +[sig-auth] ServiceAccounts +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 + should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":346,"completed":103,"skipped":1852,"failed":0} +S +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:53:23.380: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:53:25.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-281" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":104,"skipped":1853,"failed":0} +S +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:53:25.548: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ForbidConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring no more jobs are scheduled +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:59:01.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-6933" for this suite. 
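The CronJob case takes over five minutes by design: it has to watch several one-minute scheduling ticks pass while an unfinished Job blocks them. The blocking comes from concurrencyPolicy: Forbid; a compact sketch, with the schedule, names, and sleep duration chosen for illustration:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: forbid-demo
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid        # a tick is skipped while the previous Job still runs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: sleeper
            image: busybox:1.36
            command: ["sleep", "300"]
EOF
kubectl get jobs --watch           # at most one active Job should ever exist
```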
+ +• [SLOW TEST:336.184 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":105,"skipped":1854,"failed":0} +SSSSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:59:01.733: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Aug 3 06:59:01.897: INFO: The status of Pod annotationupdateded3c092-ed03-418b-a04c-f46407ed0bbf is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:59:03.908: INFO: The status of Pod annotationupdateded3c092-ed03-418b-a04c-f46407ed0bbf is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:59:05.907: INFO: The status of Pod annotationupdateded3c092-ed03-418b-a04c-f46407ed0bbf is Pending, waiting for it to be Running (with Ready = true) +Aug 3 06:59:07.913: INFO: The status of Pod annotationupdateded3c092-ed03-418b-a04c-f46407ed0bbf is Running (Ready = true) +Aug 3 06:59:08.484: INFO: Successfully updated pod "annotationupdateded3c092-ed03-418b-a04c-f46407ed0bbf" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:59:10.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4049" for this suite. 
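The Downward API case relies on the kubelet rewriting a downward API volume after pod metadata changes, which is why "Successfully updated pod" appears a moment after the annotation edit. A sketch of the same wiring; the pod name, annotation key, and image are made up:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: watcher
    image: busybox:1.36
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-demo build="2" --overwrite
kubectl logs -f annotation-demo    # the mounted file is eventually rewritten by the kubelet
```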
+ +• [SLOW TEST:8.813 seconds] +[sig-storage] Downward API volume +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":106,"skipped":1859,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:59:10.546: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Aug 3 06:59:10.681: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:10.681: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:10.681: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:10.690: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:59:10.690: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 06:59:11.712: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:11.712: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:11.712: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:11.723: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:59:11.723: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 06:59:12.709: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:12.709: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:12.709: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:12.716: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:59:12.716: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 06:59:13.714: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:13.714: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:13.714: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:13.720: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:59:13.720: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 06:59:14.711: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:14.711: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:14.711: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:14.730: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 06:59:14.730: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 06:59:15.702: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:15.702: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:15.702: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 06:59:15.711: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 3 06:59:15.711: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: listing all DaemonSets +STEP: DeleteCollection of the DaemonSets +STEP: Verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +Aug 3 06:59:15.770: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"614806"},"items":null} + +Aug 3 06:59:15.804: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"614806"},"items":[{"metadata":{"name":"daemon-set-8gvhk","generateName":"daemon-set-","namespace":"daemonsets-6519","uid":"5c3ad044-7ca3-40b3-85fa-971c8f7ba751","resourceVersion":"614797","creationTimestamp":"2022-08-03T06:59:10Z","labels":{"controller-revision-hash":"5b46c58f6f","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/ipv4pools":"[\"default-ipv4-ippool\"]","dce.daocloud.io/parcel.egress.burst":"0","dce.daocloud.io/parcel.egress.rate":"0","dce.daocloud.io/parcel.ingress.burst":"0","dce.daocloud.io/parcel.ingress.rate":"0"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"d3440bbf-859e-4686-bcae-3306493cfe38","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"kube-api-access-qgg8q","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-qgg8q","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"dce-10-6-213-50","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["dce-10-6-213-50"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-03T06:59:10Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-03T06:59:14Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-03T06:59:14Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-03T06:59:10Z"}],"hostIP":"10.6.213.50","podIP":"172.29.175.7","podIPs":[{"ip":"172.29.175.7"}],"startTime":"2022-08-03T06:59:10Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-08-03T06:59:14Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"docker://a60cf65bbab5ce
17a64a4632f0c6c0712fcbdf106129b029b4aa6cde107480e7","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-xmnb2","generateName":"daemon-set-","namespace":"daemonsets-6519","uid":"74dfd89d-33d8-4868-92e5-7c680e372618","resourceVersion":"614795","creationTimestamp":"2022-08-03T06:59:10Z","labels":{"controller-revision-hash":"5b46c58f6f","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/ipv4pools":"[\"default-ipv4-ippool\"]","dce.daocloud.io/parcel.egress.burst":"0","dce.daocloud.io/parcel.egress.rate":"0","dce.daocloud.io/parcel.ingress.burst":"0","dce.daocloud.io/parcel.ingress.rate":"0"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"d3440bbf-859e-4686-bcae-3306493cfe38","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"kube-api-access-nfh26","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-nfh26","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"dce-10-6-213-40","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["dce-10-6-213-40"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-03T06:59:10Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-03T06:59:14Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-03T06:59:14Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-03T06:59:10Z"}],"hostIP":"10.6.213.40","podIP":"172.29.31.115","podIPs":[{"ip":"172.29.31.115"}],"startTime":"2022-08-03T06:59:10Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-08-03T06:59:13Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"docker://2e
80641aeb7166810485edc7f47a9174ebfd6f99fe7a3b8d9cd14193a1559229","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 06:59:15.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6519" for this suite. + +• [SLOW TEST:5.312 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":346,"completed":107,"skipped":1898,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 06:59:15.859: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-27251cf3-7b57-4b45-9bcb-244a4da695db in namespace container-probe-2366 +Aug 3 06:59:21.969: INFO: Started pod busybox-27251cf3-7b57-4b45-9bcb-244a4da695db in namespace container-probe-2366 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 3 06:59:21.979: INFO: Initial restart count of pod busybox-27251cf3-7b57-4b45-9bcb-244a4da695db is 0 +Aug 3 07:00:08.268: INFO: Restart count of pod container-probe-2366/busybox-27251cf3-7b57-4b45-9bcb-244a4da695db is now 1 (46.288896996s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:00:08.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-2366" for this suite. 
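The ~46 seconds the probe test waits is the kubelet noticing a failing exec probe and restarting the container. The same restart-count transition can be provoked with a pod like this; image, timings, and names are illustrative:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    # healthy for 30s, then the probe target disappears
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-demo --watch    # RESTARTS should tick from 0 to 1
```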
+ +• [SLOW TEST:52.500 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":108,"skipped":1908,"failed":0} +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:00:08.359: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-6a6d13bb-768b-461b-8d4e-9274bc04f532 +STEP: Creating a pod to test consume secrets +Aug 3 07:00:08.453: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5c1e3a1-39e2-47ea-8254-a97328553f3f" in namespace "projected-8973" to be "Succeeded or Failed" +Aug 3 07:00:08.461: INFO: Pod "pod-projected-secrets-f5c1e3a1-39e2-47ea-8254-a97328553f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.377587ms +Aug 3 07:00:10.469: INFO: Pod "pod-projected-secrets-f5c1e3a1-39e2-47ea-8254-a97328553f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015574139s +Aug 3 07:00:12.485: INFO: Pod "pod-projected-secrets-f5c1e3a1-39e2-47ea-8254-a97328553f3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032357764s +STEP: Saw pod success +Aug 3 07:00:12.485: INFO: Pod "pod-projected-secrets-f5c1e3a1-39e2-47ea-8254-a97328553f3f" satisfied condition "Succeeded or Failed" +Aug 3 07:00:12.492: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-secrets-f5c1e3a1-39e2-47ea-8254-a97328553f3f container projected-secret-volume-test: +STEP: delete the pod +Aug 3 07:00:12.523: INFO: Waiting for pod pod-projected-secrets-f5c1e3a1-39e2-47ea-8254-a97328553f3f to disappear +Aug 3 07:00:12.527: INFO: Pod pod-projected-secrets-f5c1e3a1-39e2-47ea-8254-a97328553f3f no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:00:12.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8973" for this suite. 
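This variant reads a projected secret as a non-root user, with defaultMode fixing the file mode and fsGroup fixing group ownership so the read still succeeds. A sketch under those assumptions; the secret name, UID/GID, and mode are illustrative:
```
kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-nonroot-demo
spec:
  securityContext:
    runAsUser: 1000          # non-root reader
    fsGroup: 2000            # group applied to the projected files
  containers:
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "ls -lnL /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: creds
      mountPath: /etc/projected
  volumes:
  - name: creds
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: demo-secret
EOF
```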
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":109,"skipped":1908,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:00:12.543: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Aug 3 07:00:12.612: INFO: Waiting up to 5m0s for pod "pod-446e7204-f998-43c2-887e-1a1ed0a0728f" in namespace "emptydir-6507" to be "Succeeded or Failed" +Aug 3 07:00:12.618: INFO: Pod "pod-446e7204-f998-43c2-887e-1a1ed0a0728f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353011ms +Aug 3 07:00:14.635: INFO: Pod "pod-446e7204-f998-43c2-887e-1a1ed0a0728f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023311536s +Aug 3 07:00:16.648: INFO: Pod "pod-446e7204-f998-43c2-887e-1a1ed0a0728f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036718502s +STEP: Saw pod success +Aug 3 07:00:16.649: INFO: Pod "pod-446e7204-f998-43c2-887e-1a1ed0a0728f" satisfied condition "Succeeded or Failed" +Aug 3 07:00:16.655: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-446e7204-f998-43c2-887e-1a1ed0a0728f container test-container: +STEP: delete the pod +Aug 3 07:00:16.705: INFO: Waiting for pod pod-446e7204-f998-43c2-887e-1a1ed0a0728f to disappear +Aug 3 07:00:16.711: INFO: Pod pod-446e7204-f998-43c2-887e-1a1ed0a0728f no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:00:16.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6507" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":110,"skipped":1922,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:00:16.741: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-0768eea5-1c47-468b-b338-e69c52a449c5 +STEP: Creating the pod +Aug 3 07:00:16.859: INFO: The status of Pod pod-configmaps-8b4e8c27-146a-4fce-8de4-68473aaee90a is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:00:18.872: INFO: The status of Pod pod-configmaps-8b4e8c27-146a-4fce-8de4-68473aaee90a is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:00:20.869: INFO: The status of Pod pod-configmaps-8b4e8c27-146a-4fce-8de4-68473aaee90a is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:00:22.874: INFO: The status of Pod pod-configmaps-8b4e8c27-146a-4fce-8de4-68473aaee90a is Running (Ready = true) +STEP: Updating configmap configmap-test-upd-0768eea5-1c47-468b-b338-e69c52a449c5 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:01:41.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7992" for this suite. 
+ +• [SLOW TEST:84.841 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":111,"skipped":1947,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:01:41.582: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename certificates +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/certificates.k8s.io +STEP: getting /apis/certificates.k8s.io/v1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Aug 3 07:01:42.591: INFO: starting watch +STEP: patching +STEP: updating +Aug 3 07:01:42.616: INFO: waiting for watch events with expected annotations +Aug 3 07:01:42.616: INFO: saw patched and updated annotations +STEP: getting /approval +STEP: patching /approval +STEP: updating /approval +STEP: getting /status +STEP: patching /status +STEP: updating /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:01:42.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "certificates-4879" for this suite. 
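The Certificates case walks the certificates.k8s.io/v1 surface end to end, including the /approval and /status subresources. The everyday subset of that flow looks like this, assuming openssl and cluster-admin rights; the CN, file names, and the demo-csr name are illustrative:
```
openssl req -new -newkey rsa:2048 -nodes -keyout demo.key \
  -subj "/CN=demo-user" -out demo.csr
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: demo-csr
spec:
  request: $(base64 < demo.csr | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
kubectl certificate approve demo-csr
kubectl get csr demo-csr     # Approved,Issued once the signer acts
kubectl delete csr demo-csr
```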
+•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":346,"completed":112,"skipped":1960,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:01:42.758: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-ea1723c7-4ef4-4f7d-aefe-a62f877cb0e2 +STEP: Creating a pod to test consume secrets +Aug 3 07:01:42.857: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3d14fab1-ba67-48ea-9b11-04b82bdbece9" in namespace "projected-5981" to be "Succeeded or Failed" +Aug 3 07:01:42.863: INFO: Pod "pod-projected-secrets-3d14fab1-ba67-48ea-9b11-04b82bdbece9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.685697ms +Aug 3 07:01:44.879: INFO: Pod "pod-projected-secrets-3d14fab1-ba67-48ea-9b11-04b82bdbece9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021991954s +Aug 3 07:01:46.889: INFO: Pod "pod-projected-secrets-3d14fab1-ba67-48ea-9b11-04b82bdbece9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031665508s +STEP: Saw pod success +Aug 3 07:01:46.889: INFO: Pod "pod-projected-secrets-3d14fab1-ba67-48ea-9b11-04b82bdbece9" satisfied condition "Succeeded or Failed" +Aug 3 07:01:46.897: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-secrets-3d14fab1-ba67-48ea-9b11-04b82bdbece9 container projected-secret-volume-test: +STEP: delete the pod +Aug 3 07:01:46.940: INFO: Waiting for pod pod-projected-secrets-3d14fab1-ba67-48ea-9b11-04b82bdbece9 to disappear +Aug 3 07:01:46.949: INFO: Pod pod-projected-secrets-3d14fab1-ba67-48ea-9b11-04b82bdbece9 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:01:46.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5981" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":113,"skipped":2005,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:01:46.976: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name projected-secret-test-4b0237ba-2b76-4e42-aa65-68b25aae6583 +STEP: Creating a pod to test consume secrets +Aug 3 07:01:47.048: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c579052-538f-4578-973b-fb429074c9d8" in namespace "projected-1866" to be "Succeeded or Failed" +Aug 3 07:01:47.053: INFO: Pod "pod-projected-secrets-9c579052-538f-4578-973b-fb429074c9d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.90385ms +Aug 3 07:01:49.066: INFO: Pod "pod-projected-secrets-9c579052-538f-4578-973b-fb429074c9d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017935484s +Aug 3 07:01:51.077: INFO: Pod "pod-projected-secrets-9c579052-538f-4578-973b-fb429074c9d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028824916s +Aug 3 07:01:53.087: INFO: Pod "pod-projected-secrets-9c579052-538f-4578-973b-fb429074c9d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03896712s +STEP: Saw pod success +Aug 3 07:01:53.087: INFO: Pod "pod-projected-secrets-9c579052-538f-4578-973b-fb429074c9d8" satisfied condition "Succeeded or Failed" +Aug 3 07:01:53.094: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-secrets-9c579052-538f-4578-973b-fb429074c9d8 container secret-volume-test: +STEP: delete the pod +Aug 3 07:01:53.128: INFO: Waiting for pod pod-projected-secrets-9c579052-538f-4578-973b-fb429074c9d8 to disappear +Aug 3 07:01:53.134: INFO: Pod pod-projected-secrets-9c579052-538f-4578-973b-fb429074c9d8 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:01:53.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1866" for this suite. 
+ +• [SLOW TEST:6.178 seconds] +[sig-storage] Projected secret +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":114,"skipped":2018,"failed":0} +SSS +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:01:53.154: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:01:53.230: INFO: Got root ca configmap in namespace "svcaccounts-284" +Aug 3 07:01:53.238: INFO: Deleted root ca configmap in namespace "svcaccounts-284" +STEP: waiting for a new root ca configmap created +Aug 3 07:01:53.747: INFO: Recreated root ca configmap in namespace "svcaccounts-284" +Aug 3 07:01:53.755: INFO: Updated root ca configmap in namespace "svcaccounts-284" +STEP: waiting for the root ca configmap reconciled +Aug 3 07:01:54.262: INFO: Reconciled root ca configmap in namespace "svcaccounts-284" +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:01:54.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-284" for this suite. 
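The kube-root-ca.crt guarantee seen above (deleted, then "Recreated"; edited, then "Reconciled") is enforced by a controller that maintains the ConfigMap in every namespace. It is quick to observe by hand; the namespace name is arbitrary:
```
kubectl create namespace ca-demo
kubectl -n ca-demo get configmap kube-root-ca.crt    # present as soon as the namespace exists
kubectl -n ca-demo delete configmap kube-root-ca.crt
sleep 2                                              # give the controller a moment
kubectl -n ca-demo get configmap kube-root-ca.crt    # recreated automatically
kubectl delete namespace ca-demo
```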
+•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":115,"skipped":2021,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:01:54.279: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3032 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3032;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3032 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3032;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3032.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3032.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3032.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3032.svc;check="$$(dig +notcp +noall +answer +search 168.164.31.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.31.164.168_udp@PTR;check="$$(dig +tcp +noall +answer +search 168.164.31.172.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/172.31.164.168_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3032 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3032;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3032 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3032;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3032.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3032.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3032.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3032.svc;check="$$(dig +notcp +noall +answer +search 168.164.31.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.31.164.168_udp@PTR;check="$$(dig +tcp +noall +answer +search 168.164.31.172.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/172.31.164.168_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 3 07:02:00.406: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.412: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.419: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.424: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.430: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.435: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.440: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.447: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.480: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.490: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.496: INFO: Unable to read jessie_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.501: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.508: INFO: Unable to read jessie_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.513: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.518: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.522: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:00.540: INFO: Lookups using dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3032 wheezy_tcp@dns-test-service.dns-3032 wheezy_udp@dns-test-service.dns-3032.svc wheezy_tcp@dns-test-service.dns-3032.svc wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3032 jessie_tcp@dns-test-service.dns-3032 jessie_udp@dns-test-service.dns-3032.svc jessie_tcp@dns-test-service.dns-3032.svc jessie_udp@_http._tcp.dns-test-service.dns-3032.svc jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc] + +Aug 3 07:02:05.549: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.559: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.567: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.575: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.581: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.587: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.593: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.599: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.622: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.632: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.641: INFO: Unable to read jessie_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.647: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.653: INFO: Unable to read jessie_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.659: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.664: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.669: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:05.691: INFO: Lookups using dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3032 wheezy_tcp@dns-test-service.dns-3032 wheezy_udp@dns-test-service.dns-3032.svc wheezy_tcp@dns-test-service.dns-3032.svc wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3032 jessie_tcp@dns-test-service.dns-3032 jessie_udp@dns-test-service.dns-3032.svc jessie_tcp@dns-test-service.dns-3032.svc jessie_udp@_http._tcp.dns-test-service.dns-3032.svc jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc] + +Aug 3 07:02:10.550: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.556: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.562: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.567: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032 from pod 
dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.573: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.580: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.586: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.596: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.652: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.657: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.669: INFO: Unable to read jessie_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.679: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.689: INFO: Unable to read jessie_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.698: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.708: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.719: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:10.759: INFO: Lookups using dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3032 wheezy_tcp@dns-test-service.dns-3032 wheezy_udp@dns-test-service.dns-3032.svc wheezy_tcp@dns-test-service.dns-3032.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3032 jessie_tcp@dns-test-service.dns-3032 jessie_udp@dns-test-service.dns-3032.svc jessie_tcp@dns-test-service.dns-3032.svc jessie_udp@_http._tcp.dns-test-service.dns-3032.svc jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc] + +Aug 3 07:02:15.547: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.552: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.557: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.562: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.566: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.576: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.582: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.588: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.613: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.619: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.624: INFO: Unable to read jessie_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.628: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.633: INFO: Unable to read jessie_udp@dns-test-service.dns-3032.svc from pod 
dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.638: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.644: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.649: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:15.677: INFO: Lookups using dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3032 wheezy_tcp@dns-test-service.dns-3032 wheezy_udp@dns-test-service.dns-3032.svc wheezy_tcp@dns-test-service.dns-3032.svc wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3032 jessie_tcp@dns-test-service.dns-3032 jessie_udp@dns-test-service.dns-3032.svc jessie_tcp@dns-test-service.dns-3032.svc jessie_udp@_http._tcp.dns-test-service.dns-3032.svc jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc] + +Aug 3 07:02:20.548: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.557: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.563: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.570: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.579: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.583: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.588: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.594: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod 
dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.626: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.632: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.637: INFO: Unable to read jessie_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.644: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.649: INFO: Unable to read jessie_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.653: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.658: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.663: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:20.688: INFO: Lookups using dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3032 wheezy_tcp@dns-test-service.dns-3032 wheezy_udp@dns-test-service.dns-3032.svc wheezy_tcp@dns-test-service.dns-3032.svc wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3032 jessie_tcp@dns-test-service.dns-3032 jessie_udp@dns-test-service.dns-3032.svc jessie_tcp@dns-test-service.dns-3032.svc jessie_udp@_http._tcp.dns-test-service.dns-3032.svc jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc] + +Aug 3 07:02:25.547: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.556: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.562: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: 
the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.569: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.580: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.586: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.598: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.604: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.646: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.653: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.658: INFO: Unable to read jessie_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.664: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.671: INFO: Unable to read jessie_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.681: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.687: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.694: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:25.728: INFO: Lookups using dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3032 wheezy_tcp@dns-test-service.dns-3032 wheezy_udp@dns-test-service.dns-3032.svc wheezy_tcp@dns-test-service.dns-3032.svc wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3032 jessie_tcp@dns-test-service.dns-3032 jessie_udp@dns-test-service.dns-3032.svc jessie_tcp@dns-test-service.dns-3032.svc jessie_udp@_http._tcp.dns-test-service.dns-3032.svc jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc] + +Aug 3 07:02:30.547: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.551: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.558: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.564: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.570: INFO: Unable to read wheezy_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.575: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.581: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.587: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.622: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.629: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.634: INFO: Unable to read jessie_udp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.639: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032 from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource 
(get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.646: INFO: Unable to read jessie_udp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.651: INFO: Unable to read jessie_tcp@dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.656: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.661: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc from pod dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae: the server could not find the requested resource (get pods dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae) +Aug 3 07:02:30.680: INFO: Lookups using dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3032 wheezy_tcp@dns-test-service.dns-3032 wheezy_udp@dns-test-service.dns-3032.svc wheezy_tcp@dns-test-service.dns-3032.svc wheezy_udp@_http._tcp.dns-test-service.dns-3032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3032 jessie_tcp@dns-test-service.dns-3032 jessie_udp@dns-test-service.dns-3032.svc jessie_tcp@dns-test-service.dns-3032.svc jessie_udp@_http._tcp.dns-test-service.dns-3032.svc jessie_tcp@_http._tcp.dns-test-service.dns-3032.svc] + +Aug 3 07:02:35.660: INFO: DNS probes using dns-3032/dns-test-5a4fdb5c-e7ae-446f-9dc6-bd6a47b9b2ae succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:02:35.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-3032" for this suite. 
+ +• [SLOW TEST:41.540 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":346,"completed":116,"skipped":2030,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:02:35.819: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override all +Aug 3 07:02:35.874: INFO: Waiting up to 5m0s for pod "client-containers-a384a0e8-26f1-4373-8194-227fdda58451" in namespace "containers-1083" to be "Succeeded or Failed" +Aug 3 07:02:35.886: INFO: Pod "client-containers-a384a0e8-26f1-4373-8194-227fdda58451": Phase="Pending", Reason="", readiness=false. Elapsed: 11.415665ms +Aug 3 07:02:37.896: INFO: Pod "client-containers-a384a0e8-26f1-4373-8194-227fdda58451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02160438s +Aug 3 07:02:39.913: INFO: Pod "client-containers-a384a0e8-26f1-4373-8194-227fdda58451": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038546307s +STEP: Saw pod success +Aug 3 07:02:39.913: INFO: Pod "client-containers-a384a0e8-26f1-4373-8194-227fdda58451" satisfied condition "Succeeded or Failed" +Aug 3 07:02:39.924: INFO: Trying to get logs from node dce-10-6-213-50 pod client-containers-a384a0e8-26f1-4373-8194-227fdda58451 container agnhost-container: +STEP: delete the pod +Aug 3 07:02:39.986: INFO: Waiting for pod client-containers-a384a0e8-26f1-4373-8194-227fdda58451 to disappear +Aug 3 07:02:39.999: INFO: Pod client-containers-a384a0e8-26f1-4373-8194-227fdda58451 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:02:39.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-1083" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":117,"skipped":2050,"failed":0} +SSS +------------------------------ +[sig-instrumentation] Events + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:02:40.038: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing all events in all namespaces +STEP: patching the test event +STEP: fetching the test event +STEP: deleting the test event +STEP: listing all events in all namespaces +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:02:40.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-3634" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":118,"skipped":2053,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:02:40.435: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-9646 +[It] Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-9646 +STEP: Waiting until pod test-pod will start running in namespace statefulset-9646 +STEP: Creating statefulset with conflicting port in namespace statefulset-9646 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9646 +Aug 3 07:02:44.619: INFO: Observed stateful pod in namespace: statefulset-9646, name: ss-0, uid: 
0f18b730-e7a6-49cf-8ea8-370ef0c56ce7, status phase: Pending. Waiting for statefulset controller to delete. +Aug 3 07:02:44.646: INFO: Observed stateful pod in namespace: statefulset-9646, name: ss-0, uid: 0f18b730-e7a6-49cf-8ea8-370ef0c56ce7, status phase: Failed. Waiting for statefulset controller to delete. +Aug 3 07:02:44.675: INFO: Observed stateful pod in namespace: statefulset-9646, name: ss-0, uid: 0f18b730-e7a6-49cf-8ea8-370ef0c56ce7, status phase: Failed. Waiting for statefulset controller to delete. +Aug 3 07:02:44.687: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9646 +STEP: Removing pod with conflicting port in namespace statefulset-9646 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9646 and will be in running state +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 3 07:02:48.767: INFO: Deleting all statefulset in ns statefulset-9646 +Aug 3 07:02:48.772: INFO: Scaling statefulset ss to 0 +Aug 3 07:02:58.803: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 3 07:02:58.809: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:02:58.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-9646" for this suite. + +• [SLOW TEST:18.423 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":119,"skipped":2068,"failed":0} +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:02:58.858: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service nodeport-test with type=NodePort in namespace services-8807 +STEP: creating replication controller nodeport-test in namespace services-8807 +I0803 07:02:58.940205 21 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-8807, replica count: 2 +I0803 07:03:01.994307 21 runners.go:193] 
nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 07:03:04.995: INFO: Creating new exec pod +I0803 07:03:04.995144 21 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 07:03:12.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-8807 exec execpoddwcfw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Aug 3 07:03:12.414: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Aug 3 07:03:12.414: INFO: stdout: "" +Aug 3 07:03:13.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-8807 exec execpoddwcfw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Aug 3 07:03:13.820: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Aug 3 07:03:13.820: INFO: stdout: "" +Aug 3 07:03:14.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-8807 exec execpoddwcfw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Aug 3 07:03:14.766: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Aug 3 07:03:14.766: INFO: stdout: "" +Aug 3 07:03:15.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-8807 exec execpoddwcfw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Aug 3 07:03:15.682: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Aug 3 07:03:15.682: INFO: stdout: "" +Aug 3 07:03:16.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-8807 exec execpoddwcfw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Aug 3 07:03:16.671: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Aug 3 07:03:16.671: INFO: stdout: "" +Aug 3 07:03:17.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-8807 exec execpoddwcfw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Aug 3 07:03:17.694: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Aug 3 07:03:17.694: INFO: stdout: "nodeport-test-mdv59" +Aug 3 07:03:17.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-8807 exec execpoddwcfw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.246.157 80' +Aug 3 07:03:17.986: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.246.157 80\nConnection to 172.31.246.157 80 port [tcp/http] succeeded!\n" +Aug 3 07:03:17.987: INFO: stdout: "" +Aug 3 07:03:18.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-8807 exec execpoddwcfw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.246.157 80' +Aug 3 07:03:19.263: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.246.157 80\nConnection to 172.31.246.157 80 port [tcp/http] succeeded!\n" +Aug 3 07:03:19.263: INFO: stdout: "" +Aug 3 07:03:19.987: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-8807 exec execpoddwcfw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.246.157 80' +Aug 3 07:03:20.258: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.246.157 80\nConnection to 172.31.246.157 80 port [tcp/http] succeeded!\n" +Aug 3 07:03:20.258: INFO: stdout: "nodeport-test-mdv59" +Aug 3 07:03:20.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-8807 exec execpoddwcfw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.6.213.40 32712' +Aug 3 07:03:20.527: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.6.213.40 32712\nConnection to 10.6.213.40 32712 port [tcp/*] succeeded!\n" +Aug 3 07:03:20.527: INFO: stdout: "nodeport-test-mdv59" +Aug 3 07:03:20.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-8807 exec execpoddwcfw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.6.213.50 32712' +Aug 3 07:03:20.806: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.6.213.50 32712\nConnection to 10.6.213.50 32712 port [tcp/*] succeeded!\n" +Aug 3 07:03:20.806: INFO: stdout: "nodeport-test-w9bds" +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:03:20.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8807" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:21.975 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":346,"completed":120,"skipped":2068,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:03:20.834: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-9546298e-cc87-4140-b3a7-a952979f2e38 +STEP: Creating a pod to test consume secrets +Aug 3 07:03:20.930: INFO: Waiting up to 5m0s for pod "pod-secrets-5cb021e0-da2e-4f47-99e8-0a6b69357c7b" in namespace "secrets-7903" to be "Succeeded or Failed" +Aug 3 07:03:20.946: INFO: Pod "pod-secrets-5cb021e0-da2e-4f47-99e8-0a6b69357c7b": Phase="Pending", Reason="", 
readiness=false. Elapsed: 16.327162ms +Aug 3 07:03:22.956: INFO: Pod "pod-secrets-5cb021e0-da2e-4f47-99e8-0a6b69357c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026381294s +Aug 3 07:03:24.969: INFO: Pod "pod-secrets-5cb021e0-da2e-4f47-99e8-0a6b69357c7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03913332s +STEP: Saw pod success +Aug 3 07:03:24.969: INFO: Pod "pod-secrets-5cb021e0-da2e-4f47-99e8-0a6b69357c7b" satisfied condition "Succeeded or Failed" +Aug 3 07:03:24.973: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-secrets-5cb021e0-da2e-4f47-99e8-0a6b69357c7b container secret-volume-test: +STEP: delete the pod +Aug 3 07:03:25.008: INFO: Waiting for pod pod-secrets-5cb021e0-da2e-4f47-99e8-0a6b69357c7b to disappear +Aug 3 07:03:25.013: INFO: Pod pod-secrets-5cb021e0-da2e-4f47-99e8-0a6b69357c7b no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:03:25.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7903" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":121,"skipped":2076,"failed":0} + +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:03:25.028: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 07:03:25.078: INFO: Waiting up to 5m0s for pod "downwardapi-volume-030cfa8c-9591-4969-8dee-c850c634edbd" in namespace "downward-api-5228" to be "Succeeded or Failed" +Aug 3 07:03:25.085: INFO: Pod "downwardapi-volume-030cfa8c-9591-4969-8dee-c850c634edbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.823551ms +Aug 3 07:03:27.092: INFO: Pod "downwardapi-volume-030cfa8c-9591-4969-8dee-c850c634edbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014114735s +Aug 3 07:03:29.105: INFO: Pod "downwardapi-volume-030cfa8c-9591-4969-8dee-c850c634edbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026413312s +Aug 3 07:03:31.118: INFO: Pod "downwardapi-volume-030cfa8c-9591-4969-8dee-c850c634edbd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.039233497s +STEP: Saw pod success +Aug 3 07:03:31.118: INFO: Pod "downwardapi-volume-030cfa8c-9591-4969-8dee-c850c634edbd" satisfied condition "Succeeded or Failed" +Aug 3 07:03:31.122: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-030cfa8c-9591-4969-8dee-c850c634edbd container client-container: +STEP: delete the pod +Aug 3 07:03:31.152: INFO: Waiting for pod downwardapi-volume-030cfa8c-9591-4969-8dee-c850c634edbd to disappear +Aug 3 07:03:31.159: INFO: Pod downwardapi-volume-030cfa8c-9591-4969-8dee-c850c634edbd no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:03:31.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5228" for this suite. + +• [SLOW TEST:6.155 seconds] +[sig-storage] Downward API volume +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":122,"skipped":2076,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:03:31.184: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a long running pod +STEP: Ensuring resource quota with not terminating scope captures the pod usage +STEP: Ensuring resource quota with terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a terminating pod +STEP: Ensuring resource quota with terminating scope captures the pod usage +STEP: Ensuring resource quota with not terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:03:47.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-9524" for this suite. 
+ +• [SLOW TEST:16.240 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":346,"completed":123,"skipped":2083,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:03:47.426: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:03:47.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2830 create -f -' +Aug 3 07:03:48.794: INFO: stderr: "" +Aug 3 07:03:48.794: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Aug 3 07:03:48.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2830 create -f -' +Aug 3 07:03:50.293: INFO: stderr: "" +Aug 3 07:03:50.293: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Aug 3 07:03:51.301: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 07:03:51.301: INFO: Found 0 / 1 +Aug 3 07:03:52.314: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 07:03:52.314: INFO: Found 1 / 1 +Aug 3 07:03:52.314: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Aug 3 07:03:52.324: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 07:03:52.324: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Aug 3 07:03:52.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2830 describe pod agnhost-primary-zkszt' +Aug 3 07:03:52.596: INFO: stderr: "" +Aug 3 07:03:52.596: INFO: stdout: "Name: agnhost-primary-zkszt\nNamespace: kubectl-2830\nPriority: 0\nNode: dce-10-6-213-50/10.6.213.50\nStart Time: Wed, 03 Aug 2022 07:03:48 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/ipv4pools: [\"default-ipv4-ippool\"]\n dce.daocloud.io/parcel.egress.burst: 0\n dce.daocloud.io/parcel.egress.rate: 0\n dce.daocloud.io/parcel.ingress.burst: 0\n dce.daocloud.io/parcel.ingress.rate: 0\nStatus: Running\nIP: 172.29.175.24\nIPs:\n IP: 172.29.175.24\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://15ce86548c8904501c7c323b63881d700aa5ed75fe25392239629477be162ac8\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 03 Aug 2022 07:03:51 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8prtv (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-8prtv:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2830/agnhost-primary-zkszt to dce-10-6-213-50\n Normal Pulled 1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.33\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" +Aug 3 07:03:52.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2830 describe rc agnhost-primary' +Aug 3 07:03:52.831: INFO: stderr: "" +Aug 3 07:03:52.831: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2830\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-zkszt\n" +Aug 3 07:03:52.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2830 describe service agnhost-primary' +Aug 3 07:03:53.088: INFO: stderr: "" +Aug 3 07:03:53.088: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2830\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 172.31.8.232\nIPs: 172.31.8.232\nPort: 
6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 172.29.175.24:6379\nSession Affinity: None\nEvents: \n" +Aug 3 07:03:53.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2830 describe node dce-10-6-213-10' +Aug 3 07:03:53.392: INFO: stderr: "" +Aug 3 07:03:53.392: INFO: stdout: "Name: dce-10-6-213-10\nRoles: master,registry\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=dce-10-6-213-10\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\n node-role.kubernetes.io/registry=\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n uds.dce.daocloud.io/iqn: iqn.1994-05.com.redhat:7df04f11913\n uds.dce.daocloud.io/storage-ipv4: 10.6.213.10\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Mon, 01 Aug 2022 07:04:37 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: dce-10-6-213-10\n AcquireTime: \n RenewTime: Wed, 03 Aug 2022 07:03:49 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n DCEKubeApiServerProxyNotReady False Wed, 03 Aug 2022 07:02:54 +0000 Wed, 03 Aug 2022 05:21:36 +0000 DCEKubeApiServerProxyReady DCE kube apiserver proxy is posting ready status.\n TimeNotSynchronized False Wed, 03 Aug 2022 07:02:54 +0000 Wed, 03 Aug 2022 05:21:36 +0000 TimeSynchronized The time of the node is synchronized\n DCEEngineNotReady False Wed, 03 Aug 2022 07:02:54 +0000 Wed, 03 Aug 2022 05:21:36 +0000 DCEEngineReady DCE engine is posting ready status.\n DockerDiskPressure False Wed, 03 Aug 2022 07:02:54 +0000 Wed, 03 Aug 2022 05:21:36 +0000 DockerHasNoDiskPressure docker has no disk pressure\n NetworkUnavailable False Tue, 02 Aug 2022 09:20:59 +0000 Tue, 02 Aug 2022 09:20:59 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Wed, 03 Aug 2022 07:03:51 +0000 Mon, 01 Aug 2022 12:16:08 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 03 Aug 2022 07:03:51 +0000 Mon, 01 Aug 2022 12:16:08 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 03 Aug 2022 07:03:51 +0000 Mon, 01 Aug 2022 12:16:08 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 03 Aug 2022 07:03:51 +0000 Tue, 02 Aug 2022 09:29:54 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.6.213.10\n Hostname: dce-10-6-213-10\nCapacity:\n cpu: 8\n ephemeral-storage: 102350Mi\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 16266108Ki\n pods: 110\nAllocatable:\n cpu: 7500m\n ephemeral-storage: 96589578081\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 15639420Ki\n pods: 110\nSystem Info:\n Machine ID: 560ded7def0240d394131ad9e58f5e11\n System UUID: 64E53442-D1E8-35CD-2817-90FE567E11E9\n Boot ID: 7432fdfa-a8c8-4bf0-8506-890521f6f692\n Kernel Version: 3.10.0-957.27.2.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.7\n Kubelet Version: v1.23.3\n Kube-Proxy Version: v1.23.3\nPodCIDR: 172.30.0.0/24\nPodCIDRs: 172.30.0.0/24\nNon-terminated Pods: (19 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-kube-controllers-77d4f75847-xs5vh 100m (1%) 400m (5%) 200Mi (1%) 400Mi (2%) 
21h\n kube-system calico-node-hz4ng 200m (2%) 800m (10%) 200Mi (1%) 400Mi (2%) 21h\n kube-system dce-chart-manager-f9f8b657b-zz9m4 200m (2%) 1 (13%) 400Mi (2%) 1000Mi (6%) 21h\n kube-system dce-core-keepalived-7c69596969-8x4xd 100m (1%) 400m (5%) 200Mi (1%) 400Mi (2%) 41h\n kube-system dce-engine-rsj77 80m (1%) 400m (5%) 200Mi (1%) 400Mi (2%) 21h\n kube-system dce-etcd-dce-10-6-213-10 350m (4%) 1400m (18%) 1Gi (6%) 8Gi (53%) 41h\n kube-system dce-kube-apiserver-dce-10-6-213-10 350m (4%) 1400m (18%) 717Mi (4%) 8Gi (53%) 103m\n kube-system dce-kube-apiserver-proxy-dce-10-6-213-10 100m (1%) 400m (5%) 150Mi (0%) 300Mi (1%) 41h\n kube-system dce-kube-controller-manager-dce-10-6-213-10 180m (2%) 720m (9%) 256Mi (1%) 512Mi (3%) 41h\n kube-system dce-kube-scheduler-dce-10-6-213-10 180m (2%) 720m (9%) 256Mi (1%) 8Gi (53%) 41h\n kube-system dce-parcel-agent-4rv4b 200m (2%) 1 (13%) 150Mi (0%) 1000Mi (6%) 47h\n kube-system dce-parcel-server-5hglx 200m (2%) 1 (13%) 400Mi (2%) 1200Mi (7%) 47h\n kube-system dce-prometheus-96c789c9b-4gn2k 125m (1%) 450m (6%) 275Mi (1%) 550Mi (3%) 21h\n kube-system dce-registry-9699c65bd-zfdg2 250m (3%) 1500m (20%) 400Mi (2%) 1538Mi (10%) 21h\n kube-system dce-uds-failover-assistant-7d8c4779ff-pc9cm 50m (0%) 100m (1%) 50Mi (0%) 80Mi (0%) 21h\n kube-system dce-uds-host-driver-bvzrp 100m (1%) 200m (2%) 100Mi (0%) 200Mi (1%) 21h\n kube-system kube-proxy-tmrsn 100m (1%) 400m (5%) 300Mi (1%) 600Mi (3%) 47h\n kube-system node-local-dns-rjv9l 250m (3%) 1 (13%) 250M (1%) 250M (1%) 47h\n sonobuoy sonobuoy-systemd-logs-daemon-set-10147ad5bf5a4ba1-gqpdl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 47m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 3115m (41%) 13290m (177%)\n memory 5784384128 (36%) 35016585856 (218%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" +Aug 3 07:03:53.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2830 describe namespace kubectl-2830' +Aug 3 07:03:53.525: INFO: stderr: "" +Aug 3 07:03:53.525: INFO: stdout: "Name: kubectl-2830\nLabels: e2e-framework=kubectl\n e2e-run=f3f10412-cf7f-4e50-98c6-dad5df587000\n kubernetes.io/metadata.name=kubectl-2830\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:03:53.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2830" for this suite. 
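The describe checks above can be reproduced by hand against a live cluster with the same commands the suite ran; the namespace, pod name, and node name below are the ones generated in this particular run and will differ on a fresh run:
```
kubectl --namespace=kubectl-2830 describe pod agnhost-primary-zkszt
kubectl --namespace=kubectl-2830 describe rc agnhost-primary
kubectl --namespace=kubectl-2830 describe service agnhost-primary
kubectl describe node dce-10-6-213-10
kubectl describe namespace kubectl-2830
```
The test only asserts that the relevant fields (labels, selectors, endpoints, node conditions) appear in the output; the exact values vary per cluster.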
+ +• [SLOW TEST:6.132 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl describe + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1107 + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":346,"completed":124,"skipped":2118,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:03:53.559: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:03:54.097: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 07:03:56.125: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 3, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 3, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 3, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 3, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 07:03:59.153: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:03:59.164: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4322-crds.webhook.example.com via the AdmissionRegistration API +Aug 3 07:04:09.737: INFO: Waiting for webhook configuration to be ready... 
+STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:04:10.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3367" for this suite. +STEP: Destroying namespace "webhook-3367-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:16.942 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":346,"completed":125,"skipped":2137,"failed":0} +S +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:04:10.502: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Aug 3 07:04:10.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 create -f -' +Aug 3 07:04:11.985: INFO: stderr: "" +Aug 3 07:04:11.985: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Aug 3 07:04:11.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 3 07:04:12.148: INFO: stderr: "" +Aug 3 07:04:12.148: INFO: stdout: "update-demo-nautilus-jl8zv update-demo-nautilus-jnbnb " +Aug 3 07:04:12.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-jl8zv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 3 07:04:12.251: INFO: stderr: "" +Aug 3 07:04:12.251: INFO: stdout: "" +Aug 3 07:04:12.251: INFO: update-demo-nautilus-jl8zv is created but not running +Aug 3 07:04:17.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 3 07:04:17.374: INFO: stderr: "" +Aug 3 07:04:17.374: INFO: stdout: "update-demo-nautilus-jl8zv update-demo-nautilus-jnbnb " +Aug 3 07:04:17.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-jl8zv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 3 07:04:17.485: INFO: stderr: "" +Aug 3 07:04:17.485: INFO: stdout: "true" +Aug 3 07:04:17.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-jl8zv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 3 07:04:17.577: INFO: stderr: "" +Aug 3 07:04:17.577: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 3 07:04:17.577: INFO: validating pod update-demo-nautilus-jl8zv +Aug 3 07:04:17.587: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 3 07:04:17.587: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 3 07:04:17.587: INFO: update-demo-nautilus-jl8zv is verified up and running +Aug 3 07:04:17.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-jnbnb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 3 07:04:17.698: INFO: stderr: "" +Aug 3 07:04:17.698: INFO: stdout: "true" +Aug 3 07:04:17.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-jnbnb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 3 07:04:17.809: INFO: stderr: "" +Aug 3 07:04:17.809: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 3 07:04:17.809: INFO: validating pod update-demo-nautilus-jnbnb +Aug 3 07:04:17.818: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 3 07:04:17.818: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 3 07:04:17.818: INFO: update-demo-nautilus-jnbnb is verified up and running +STEP: scaling down the replication controller +Aug 3 07:04:17.821: INFO: scanned /root for discovery docs: +Aug 3 07:04:17.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Aug 3 07:04:18.986: INFO: stderr: "" +Aug 3 07:04:18.986: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
+Aug 3 07:04:18.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 3 07:04:19.138: INFO: stderr: "" +Aug 3 07:04:19.138: INFO: stdout: "update-demo-nautilus-jl8zv update-demo-nautilus-jnbnb " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Aug 3 07:04:24.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 3 07:04:24.286: INFO: stderr: "" +Aug 3 07:04:24.286: INFO: stdout: "update-demo-nautilus-jnbnb " +Aug 3 07:04:24.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-jnbnb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 3 07:04:24.421: INFO: stderr: "" +Aug 3 07:04:24.421: INFO: stdout: "true" +Aug 3 07:04:24.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-jnbnb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 3 07:04:24.549: INFO: stderr: "" +Aug 3 07:04:24.549: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 3 07:04:24.549: INFO: validating pod update-demo-nautilus-jnbnb +Aug 3 07:04:24.555: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 3 07:04:24.555: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 3 07:04:24.555: INFO: update-demo-nautilus-jnbnb is verified up and running +STEP: scaling up the replication controller +Aug 3 07:04:24.557: INFO: scanned /root for discovery docs: +Aug 3 07:04:24.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Aug 3 07:04:25.763: INFO: stderr: "" +Aug 3 07:04:25.763: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Aug 3 07:04:25.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 3 07:04:25.879: INFO: stderr: "" +Aug 3 07:04:25.879: INFO: stdout: "update-demo-nautilus-ckc9t update-demo-nautilus-jnbnb " +Aug 3 07:04:25.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-ckc9t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 3 07:04:26.012: INFO: stderr: "" +Aug 3 07:04:26.012: INFO: stdout: "" +Aug 3 07:04:26.012: INFO: update-demo-nautilus-ckc9t is created but not running +Aug 3 07:04:31.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 3 07:04:31.125: INFO: stderr: "" +Aug 3 07:04:31.125: INFO: stdout: "update-demo-nautilus-ckc9t update-demo-nautilus-jnbnb " +Aug 3 07:04:31.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-ckc9t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 3 07:04:31.241: INFO: stderr: "" +Aug 3 07:04:31.241: INFO: stdout: "true" +Aug 3 07:04:31.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-ckc9t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 3 07:04:31.349: INFO: stderr: "" +Aug 3 07:04:31.349: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 3 07:04:31.349: INFO: validating pod update-demo-nautilus-ckc9t +Aug 3 07:04:31.358: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 3 07:04:31.358: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 3 07:04:31.358: INFO: update-demo-nautilus-ckc9t is verified up and running +Aug 3 07:04:31.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-jnbnb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 3 07:04:31.472: INFO: stderr: "" +Aug 3 07:04:31.472: INFO: stdout: "true" +Aug 3 07:04:31.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods update-demo-nautilus-jnbnb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 3 07:04:31.588: INFO: stderr: "" +Aug 3 07:04:31.589: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 3 07:04:31.589: INFO: validating pod update-demo-nautilus-jnbnb +Aug 3 07:04:31.595: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 3 07:04:31.596: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 3 07:04:31.596: INFO: update-demo-nautilus-jnbnb is verified up and running +STEP: using delete to clean up resources +Aug 3 07:04:31.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 delete --grace-period=0 --force -f -' +Aug 3 07:04:31.764: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Aug 3 07:04:31.764: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Aug 3 07:04:31.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get rc,svc -l name=update-demo --no-headers' +Aug 3 07:04:31.907: INFO: stderr: "No resources found in kubectl-5297 namespace.\n" +Aug 3 07:04:31.907: INFO: stdout: "" +Aug 3 07:04:31.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5297 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Aug 3 07:04:32.010: INFO: stderr: "" +Aug 3 07:04:32.010: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:04:32.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5297" for this suite. + +• [SLOW TEST:21.531 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":346,"completed":126,"skipped":2138,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:04:32.033: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Aug 3 07:04:32.109: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:04:38.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-6128" for this suite. 
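For reference, a RestartAlways pod with init containers has the shape below; this is a minimal sketch with hypothetical names and images, not the exact spec the suite submits. The kubelet runs the init containers to completion, in order, before starting the app container:
```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                  # hypothetical name
spec:
  restartPolicy: Always
  initContainers:                  # each must exit 0 before the next starts
  - name: init-1
    image: busybox:1.36
    command: ['sh', '-c', 'echo first init done']
  - name: init-2
    image: busybox:1.36
    command: ['sh', '-c', 'echo second init done']
  containers:
  - name: app
    image: busybox:1.36
    command: ['sh', '-c', 'sleep 3600']
EOF
kubectl get pod init-demo -w       # status walks Init:0/2 -> Init:1/2 -> Running
```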
+ +• [SLOW TEST:6.114 seconds] +[sig-node] InitContainer [NodeConformance] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":346,"completed":127,"skipped":2176,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:04:38.148: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-9d44ac23-0c47-434e-a451-3a2b2375c583 +STEP: Creating a pod to test consume configMaps +Aug 3 07:04:38.216: INFO: Waiting up to 5m0s for pod "pod-configmaps-26604482-7fbc-40de-9085-75657ec31299" in namespace "configmap-2990" to be "Succeeded or Failed" +Aug 3 07:04:38.227: INFO: Pod "pod-configmaps-26604482-7fbc-40de-9085-75657ec31299": Phase="Pending", Reason="", readiness=false. Elapsed: 10.795222ms +Aug 3 07:04:40.239: INFO: Pod "pod-configmaps-26604482-7fbc-40de-9085-75657ec31299": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023292845s +Aug 3 07:04:42.251: INFO: Pod "pod-configmaps-26604482-7fbc-40de-9085-75657ec31299": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035188206s +Aug 3 07:04:44.261: INFO: Pod "pod-configmaps-26604482-7fbc-40de-9085-75657ec31299": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045320511s +STEP: Saw pod success +Aug 3 07:04:44.261: INFO: Pod "pod-configmaps-26604482-7fbc-40de-9085-75657ec31299" satisfied condition "Succeeded or Failed" +Aug 3 07:04:44.266: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-configmaps-26604482-7fbc-40de-9085-75657ec31299 container agnhost-container: +STEP: delete the pod +Aug 3 07:04:44.328: INFO: Waiting for pod pod-configmaps-26604482-7fbc-40de-9085-75657ec31299 to disappear +Aug 3 07:04:44.333: INFO: Pod pod-configmaps-26604482-7fbc-40de-9085-75657ec31299 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:04:44.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2990" for this suite. 
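"With mappings" refers to the items list on a configMap volume, which remaps a selected key to a chosen file path instead of mounting every key under its own name. A minimal sketch with hypothetical names (the suite uses its own generated names and an agnhost image):
```
kubectl create configmap demo-cm --from-literal=data-2='value-2'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-map-demo                # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36
    command: ['sh', '-c', 'cat /etc/configmap-volume/path/to/data-2']
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-cm
      items:                       # the mapping: key data-2 -> path/to/data-2
      - key: data-2
        path: path/to/data-2
EOF
```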
+ +• [SLOW TEST:6.202 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":128,"skipped":2184,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:04:44.351: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Aug 3 07:04:44.408: INFO: Waiting up to 1m0s for all nodes to be ready +Aug 3 07:05:44.497: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. +Aug 3 07:05:44.529: INFO: Created pod: pod0-0-sched-preemption-low-priority +Aug 3 07:05:44.537: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Aug 3 07:05:44.559: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Aug 3 07:05:44.570: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a high priority pod that has same requirements as that of lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:06:00.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-6437" for this suite. 
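Basic preemption rests on PriorityClass objects like the low/medium/high ones the suite creates, plus a pending high-priority pod sized so it can only fit if a lower-priority pod is evicted. A minimal sketch of the mechanism, with hypothetical names and a request size that depends on the cluster:
```
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-demo         # hypothetical
value: 1000                        # higher value wins on contention
globalDefault: false
description: demo class for basic preemption
EOF
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: preemptor-demo             # hypothetical
spec:
  priorityClassName: high-priority-demo
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.6
    resources:
      requests:
        memory: "1Gi"              # size so the pod only fits after an eviction
EOF
```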
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:76.431 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":129,"skipped":2205,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:06:00.783: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +[It] should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ReplaceConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring the job is replaced with a new one +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:08:00.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-8951" for this suite. 
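ReplaceConcurrent corresponds to concurrencyPolicy: Replace, under which a job still running when the next schedule fires is deleted and replaced rather than run alongside. A minimal sketch with hypothetical names; the 300-second sleep deliberately outlives the one-minute schedule:
```
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: replace-demo               # hypothetical
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Replace       # the next run replaces a still-running job
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: c
            image: busybox:1.36
            command: ['sh', '-c', 'sleep 300']
EOF
kubectl get jobs -w                # each tick deletes the old job, creates a new one
```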
+ +• [SLOW TEST:120.135 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":130,"skipped":2269,"failed":0} +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:08:00.918: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:08:05.044: INFO: Deleting pod "var-expansion-c9f693e9-b8fe-46f7-a910-855acc5b1f8b" in namespace "var-expansion-5406" +Aug 3 07:08:05.053: INFO: Wait up to 5m0s for pod "var-expansion-c9f693e9-b8fe-46f7-a910-855acc5b1f8b" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:08:13.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5406" for this suite. 
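The failure case here is a volume subpath that expands to an absolute path; subpaths must stay relative to the volume root, so a spec of the shape below is rejected (by API validation or by the kubelet, depending on version) and the pod never runs. Names are hypothetical:
```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bad-subpath-demo           # hypothetical; expected to fail, not run
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.36
    command: ['sh', '-c', 'true']
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: /tmp/$(POD_NAME)   # absolute -> invalid; must be relative
  volumes:
  - name: workdir
    emptyDir: {}
EOF
```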
+ +• [SLOW TEST:12.172 seconds] +[sig-node] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":131,"skipped":2269,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:08:13.091: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on node default medium +Aug 3 07:08:13.153: INFO: Waiting up to 5m0s for pod "pod-cedd01bd-71a9-409d-bf17-51968095a2d1" in namespace "emptydir-8934" to be "Succeeded or Failed" +Aug 3 07:08:13.159: INFO: Pod "pod-cedd01bd-71a9-409d-bf17-51968095a2d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.497325ms +Aug 3 07:08:15.172: INFO: Pod "pod-cedd01bd-71a9-409d-bf17-51968095a2d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018864367s +Aug 3 07:08:17.195: INFO: Pod "pod-cedd01bd-71a9-409d-bf17-51968095a2d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042540696s +STEP: Saw pod success +Aug 3 07:08:17.196: INFO: Pod "pod-cedd01bd-71a9-409d-bf17-51968095a2d1" satisfied condition "Succeeded or Failed" +Aug 3 07:08:17.204: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-cedd01bd-71a9-409d-bf17-51968095a2d1 container test-container: +STEP: delete the pod +Aug 3 07:08:17.284: INFO: Waiting for pod pod-cedd01bd-71a9-409d-bf17-51968095a2d1 to disappear +Aug 3 07:08:17.302: INFO: Pod pod-cedd01bd-71a9-409d-bf17-51968095a2d1 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:08:17.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8934" for this suite. 
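The "correct mode" check reads the permission bits of an emptyDir mount backed by the node's default medium (disk rather than memory). A minimal way to inspect the same thing by hand, with hypothetical names:
```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36
    command: ['sh', '-c', 'ls -ld /test-volume']   # prints the mount's mode bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # default medium: node storage, not Memory
EOF
kubectl logs emptydir-mode-demo    # after the pod completes
```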
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":132,"skipped":2286,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:08:17.323: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Aug 3 07:08:17.437: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:17.437: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:17.437: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:17.442: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 07:08:17.442: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 07:08:18.462: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:18.462: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:18.462: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:18.488: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 07:08:18.488: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 07:08:19.451: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:19.451: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:19.451: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:19.457: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 07:08:19.457: INFO: Node dce-10-6-213-40 is running 0 daemon 
pod, expected 1 +Aug 3 07:08:20.455: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:20.455: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:20.455: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:20.460: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 07:08:20.460: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 07:08:21.452: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:21.453: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:21.453: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:21.457: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 3 07:08:21.457: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Stop a daemon pod, check that the daemon pod is revived. +Aug 3 07:08:21.494: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:21.494: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:21.494: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:21.498: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 07:08:21.498: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 07:08:22.512: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:22.512: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:22.512: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:22.520: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 07:08:22.520: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 07:08:23.512: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:23.512: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:23.512: INFO: DaemonSet pods can't 
tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:23.521: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 07:08:23.521: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 07:08:24.513: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:24.514: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:24.514: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:24.520: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 07:08:24.520: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 07:08:25.516: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:25.516: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:25.516: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:25.522: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 07:08:25.522: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 07:08:26.515: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:26.515: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:26.515: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:26.523: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 07:08:26.523: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 07:08:27.511: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:27.512: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:27.512: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:27.519: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 07:08:27.519: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 07:08:28.515: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 
07:08:28.515: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:28.516: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:28.522: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 07:08:28.522: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 07:08:29.511: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:29.511: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:29.511: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:29.519: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 07:08:29.519: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 07:08:30.515: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:30.515: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:30.515: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:08:30.520: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 3 07:08:30.520: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4037, will wait for the garbage collector to delete the pods +Aug 3 07:08:30.593: INFO: Deleting DaemonSet.extensions daemon-set took: 15.520838ms +Aug 3 07:08:30.694: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.880479ms +Aug 3 07:08:36.017: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 07:08:36.017: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 3 07:08:36.022: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"618208"},"items":null} + +Aug 3 07:08:36.028: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"618208"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:08:36.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4037" for this suite. 
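A DaemonSet of the same shape can be created as below (hypothetical names); with no toleration for the master taint, pods land only on the schedulable workers, exactly as the skip messages above record, and deleting a daemon pod makes the controller revive it:
```
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-demo                # hypothetical
spec:
  selector:
    matchLabels:
      app: daemon-demo
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      containers:                  # no master toleration -> workers only
      - name: app
        image: busybox:1.36
        command: ['sh', '-c', 'sleep 3600']
EOF
kubectl rollout status ds/daemon-demo
kubectl delete pod -l app=daemon-demo   # the controller recreates the daemon pods
```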
+ +• [SLOW TEST:18.737 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":133,"skipped":2318,"failed":0} +S +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:08:36.060: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:08:36.531: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 07:08:38.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 8, 36, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 8, 36, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 8, 36, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 8, 36, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 07:08:41.584: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:08:41.603: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2224-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:08:44.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-600" for this suite. 
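Both webhook cases register a MutatingWebhookConfiguration that points the apiserver at an in-cluster service. The skeleton below shows only the registration shape; every name is hypothetical, and a working setup additionally needs a backend serving TLS plus its caBundle:
```
kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-crd-mutator                 # hypothetical
webhooks:
- name: demo-crd.webhook.example.com     # hypothetical
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: ["stable.example.com"]    # hypothetical CRD group
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["demos"]
  clientConfig:
    service:
      namespace: default                 # hypothetical; must serve TLS on 443
      name: demo-webhook-svc
      path: /mutating-custom-resource
    # caBundle: <base64 PEM> so the apiserver trusts the backend
EOF
```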
+STEP: Destroying namespace "webhook-600-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:8.924 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":134,"skipped":2319,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:08:44.985: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Aug 3 07:08:45.064: INFO: Waiting up to 5m0s for pod "downward-api-b868aea6-c6d3-4fab-a156-f16734e3c958" in namespace "downward-api-3793" to be "Succeeded or Failed" +Aug 3 07:08:45.069: INFO: Pod "downward-api-b868aea6-c6d3-4fab-a156-f16734e3c958": Phase="Pending", Reason="", readiness=false. Elapsed: 5.605553ms +Aug 3 07:08:47.081: INFO: Pod "downward-api-b868aea6-c6d3-4fab-a156-f16734e3c958": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017207315s +Aug 3 07:08:49.100: INFO: Pod "downward-api-b868aea6-c6d3-4fab-a156-f16734e3c958": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036254054s +STEP: Saw pod success +Aug 3 07:08:49.100: INFO: Pod "downward-api-b868aea6-c6d3-4fab-a156-f16734e3c958" satisfied condition "Succeeded or Failed" +Aug 3 07:08:49.106: INFO: Trying to get logs from node dce-10-6-213-50 pod downward-api-b868aea6-c6d3-4fab-a156-f16734e3c958 container dapi-container: +STEP: delete the pod +Aug 3 07:08:49.143: INFO: Waiting for pod downward-api-b868aea6-c6d3-4fab-a156-f16734e3c958 to disappear +Aug 3 07:08:49.148: INFO: Pod downward-api-b868aea6-c6d3-4fab-a156-f16734e3c958 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:08:49.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3793" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":346,"completed":135,"skipped":2334,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:08:49.173: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in volume subpath +Aug 3 07:08:49.258: INFO: Waiting up to 5m0s for pod "var-expansion-d8e5c312-ef9e-47c6-a5c3-d1b23b8001db" in namespace "var-expansion-3835" to be "Succeeded or Failed" +Aug 3 07:08:49.265: INFO: Pod "var-expansion-d8e5c312-ef9e-47c6-a5c3-d1b23b8001db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.757513ms +Aug 3 07:08:51.276: INFO: Pod "var-expansion-d8e5c312-ef9e-47c6-a5c3-d1b23b8001db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018075034s +Aug 3 07:08:53.288: INFO: Pod "var-expansion-d8e5c312-ef9e-47c6-a5c3-d1b23b8001db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029701278s +Aug 3 07:08:55.303: INFO: Pod "var-expansion-d8e5c312-ef9e-47c6-a5c3-d1b23b8001db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045123829s +STEP: Saw pod success +Aug 3 07:08:55.303: INFO: Pod "var-expansion-d8e5c312-ef9e-47c6-a5c3-d1b23b8001db" satisfied condition "Succeeded or Failed" +Aug 3 07:08:55.312: INFO: Trying to get logs from node dce-10-6-213-50 pod var-expansion-d8e5c312-ef9e-47c6-a5c3-d1b23b8001db container dapi-container: +STEP: delete the pod +Aug 3 07:08:55.358: INFO: Waiting for pod var-expansion-d8e5c312-ef9e-47c6-a5c3-d1b23b8001db to disappear +Aug 3 07:08:55.367: INFO: Pod var-expansion-d8e5c312-ef9e-47c6-a5c3-d1b23b8001db no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:08:55.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-3835" for this suite. 
+ +• [SLOW TEST:6.216 seconds] +[sig-node] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":136,"skipped":2409,"failed":0} +SSS +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:08:55.389: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +Aug 3 07:08:55.459: INFO: created test-event-1 +Aug 3 07:08:55.464: INFO: created test-event-2 +Aug 3 07:08:55.470: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace +STEP: delete collection of events +Aug 3 07:08:55.474: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +Aug 3 07:08:55.522: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:08:55.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-9598" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":137,"skipped":2412,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:08:55.545: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-6440 +STEP: creating service affinity-nodeport-transition in namespace services-6440 +STEP: creating replication controller affinity-nodeport-transition in namespace services-6440 +I0803 07:08:55.649516 21 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-6440, replica count: 3 +I0803 07:08:58.700377 21 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 07:09:01.701196 21 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 07:09:01.741: INFO: Creating new exec pod +Aug 3 07:09:08.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6440 exec execpod-affinitycj6rb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' +Aug 3 07:09:09.130: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Aug 3 07:09:09.131: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 07:09:09.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6440 exec execpod-affinitycj6rb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.27.44 80' +Aug 3 07:09:09.498: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.27.44 80\nConnection to 172.31.27.44 80 port [tcp/http] succeeded!\n" +Aug 3 07:09:09.498: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 07:09:09.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6440 exec execpod-affinitycj6rb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.6.213.40 30424' +Aug 3 07:09:09.802: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.6.213.40 30424\nConnection to 10.6.213.40 30424 port [tcp/*] succeeded!\n" +Aug 3 07:09:09.802: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: 
close\r\n\r\n400 Bad Request" +Aug 3 07:09:09.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6440 exec execpod-affinitycj6rb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.6.213.50 30424' +Aug 3 07:09:10.076: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.6.213.50 30424\nConnection to 10.6.213.50 30424 port [tcp/*] succeeded!\n" +Aug 3 07:09:10.076: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 07:09:10.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6440 exec execpod-affinitycj6rb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.6.213.40:30424/ ; done' +Aug 3 07:09:10.546: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n" +Aug 3 07:09:10.546: INFO: stdout: "\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf" +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: 
affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:10.546: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:40.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6440 exec execpod-affinitycj6rb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.6.213.40:30424/ ; done' +Aug 3 07:09:40.967: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n" +Aug 3 07:09:40.967: INFO: stdout: "\naffinity-nodeport-transition-thnbv\naffinity-nodeport-transition-wpzl5\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-thnbv\naffinity-nodeport-transition-thnbv\naffinity-nodeport-transition-wpzl5\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-thnbv\naffinity-nodeport-transition-thnbv\naffinity-nodeport-transition-wpzl5\naffinity-nodeport-transition-thnbv\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-thnbv\naffinity-nodeport-transition-thnbv\naffinity-nodeport-transition-thnbv" +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-thnbv +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-wpzl5 +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-thnbv +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-thnbv +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-wpzl5 +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-thnbv +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-thnbv +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-wpzl5 +Aug 3 07:09:40.967: INFO: Received response 
from host: affinity-nodeport-transition-thnbv +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-thnbv +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-thnbv +Aug 3 07:09:40.967: INFO: Received response from host: affinity-nodeport-transition-thnbv +Aug 3 07:09:40.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6440 exec execpod-affinitycj6rb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.6.213.40:30424/ ; done' +Aug 3 07:09:41.412: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:30424/\n" +Aug 3 07:09:41.412: INFO: stdout: "\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf\naffinity-nodeport-transition-9tmcf" +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received 
response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Received response from host: affinity-nodeport-transition-9tmcf +Aug 3 07:09:41.412: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6440, will wait for the garbage collector to delete the pods +Aug 3 07:09:41.503: INFO: Deleting ReplicationController affinity-nodeport-transition took: 9.405837ms +Aug 3 07:09:41.604: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.510895ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:09:45.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6440" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:50.328 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":138,"skipped":2429,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:09:45.874: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-40a74456-1759-448e-8a99-0a7783aeb3f9 +STEP: Creating a pod to test consume secrets +Aug 3 07:09:45.948: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-918a986a-7a2a-4073-b6a9-bb943190996e" in namespace "projected-8625" to be "Succeeded or Failed" +Aug 3 07:09:45.961: INFO: Pod "pod-projected-secrets-918a986a-7a2a-4073-b6a9-bb943190996e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.791065ms +Aug 3 07:09:47.979: INFO: Pod "pod-projected-secrets-918a986a-7a2a-4073-b6a9-bb943190996e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030867024s +Aug 3 07:09:49.989: INFO: Pod "pod-projected-secrets-918a986a-7a2a-4073-b6a9-bb943190996e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.040738311s +Aug 3 07:09:52.000: INFO: Pod "pod-projected-secrets-918a986a-7a2a-4073-b6a9-bb943190996e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052007452s +STEP: Saw pod success +Aug 3 07:09:52.000: INFO: Pod "pod-projected-secrets-918a986a-7a2a-4073-b6a9-bb943190996e" satisfied condition "Succeeded or Failed" +Aug 3 07:09:52.007: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-secrets-918a986a-7a2a-4073-b6a9-bb943190996e container projected-secret-volume-test: +STEP: delete the pod +Aug 3 07:09:52.050: INFO: Waiting for pod pod-projected-secrets-918a986a-7a2a-4073-b6a9-bb943190996e to disappear +Aug 3 07:09:52.055: INFO: Pod pod-projected-secrets-918a986a-7a2a-4073-b6a9-bb943190996e no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:09:52.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8625" for this suite. + +• [SLOW TEST:6.196 seconds] +[sig-storage] Projected secret +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":139,"skipped":2436,"failed":0} +SSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:09:52.070: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-02efed4c-c3e7-4d25-9b3e-f59c206ba049 +STEP: Creating a pod to test consume configMaps +Aug 3 07:09:52.154: INFO: Waiting up to 5m0s for pod "pod-configmaps-39f7fd5a-0a9d-4bf8-b71a-01f0f1355771" in namespace "configmap-9021" to be "Succeeded or Failed" +Aug 3 07:09:52.161: INFO: Pod "pod-configmaps-39f7fd5a-0a9d-4bf8-b71a-01f0f1355771": Phase="Pending", Reason="", readiness=false. Elapsed: 7.34658ms +Aug 3 07:09:54.177: INFO: Pod "pod-configmaps-39f7fd5a-0a9d-4bf8-b71a-01f0f1355771": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022615092s +Aug 3 07:09:56.190: INFO: Pod "pod-configmaps-39f7fd5a-0a9d-4bf8-b71a-01f0f1355771": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036125767s +Aug 3 07:09:58.207: INFO: Pod "pod-configmaps-39f7fd5a-0a9d-4bf8-b71a-01f0f1355771": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.05294763s +STEP: Saw pod success +Aug 3 07:09:58.207: INFO: Pod "pod-configmaps-39f7fd5a-0a9d-4bf8-b71a-01f0f1355771" satisfied condition "Succeeded or Failed" +Aug 3 07:09:58.214: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-configmaps-39f7fd5a-0a9d-4bf8-b71a-01f0f1355771 container agnhost-container: +STEP: delete the pod +Aug 3 07:09:58.245: INFO: Waiting for pod pod-configmaps-39f7fd5a-0a9d-4bf8-b71a-01f0f1355771 to disappear +Aug 3 07:09:58.249: INFO: Pod pod-configmaps-39f7fd5a-0a9d-4bf8-b71a-01f0f1355771 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:09:58.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9021" for this suite. + +• [SLOW TEST:6.192 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":140,"skipped":2441,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:09:58.263: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-b11b374c-de9f-453a-b914-1698c7f0a647 in namespace container-probe-6590 +Aug 3 07:10:04.332: INFO: Started pod liveness-b11b374c-de9f-453a-b914-1698c7f0a647 in namespace container-probe-6590 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 3 07:10:04.340: INFO: Initial restart count of pod liveness-b11b374c-de9f-453a-b914-1698c7f0a647 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:14:04.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-6590" for this suite. 
+ +• [SLOW TEST:246.306 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":346,"completed":141,"skipped":2487,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:14:04.569: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-8647 +[It] should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-8647 +Aug 3 07:14:04.664: INFO: Found 0 stateful pods, waiting for 1 +Aug 3 07:14:14.683: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label +STEP: Getting /status +Aug 3 07:14:14.713: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status +Aug 3 07:14:14.726: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated +Aug 3 07:14:14.729: INFO: Observed &StatefulSet event: ADDED +Aug 3 07:14:14.729: INFO: Found Statefulset ss in namespace statefulset-8647 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Aug 3 07:14:14.729: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status +Aug 3 07:14:14.729: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Aug 3 07:14:14.737: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched +Aug 3 07:14:14.742: INFO: Observed &StatefulSet event: ADDED +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 3 07:14:14.742: INFO: Deleting all statefulset in ns statefulset-8647 +Aug 3 07:14:14.747: INFO: Scaling statefulset ss to 0 +Aug 3 07:14:24.810: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 3 07:14:24.821: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:14:24.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-8647" for this suite. + +• [SLOW TEST:20.314 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":142,"skipped":2494,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:14:24.884: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 07:14:24.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-882fca17-c221-4007-9879-887a090dc7b0" in namespace "downward-api-8020" to be "Succeeded or Failed" +Aug 3 07:14:24.966: INFO: Pod "downwardapi-volume-882fca17-c221-4007-9879-887a090dc7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.845151ms +Aug 3 07:14:26.980: INFO: Pod "downwardapi-volume-882fca17-c221-4007-9879-887a090dc7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018597484s +Aug 3 07:14:28.992: INFO: Pod "downwardapi-volume-882fca17-c221-4007-9879-887a090dc7b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030310329s +STEP: Saw pod success +Aug 3 07:14:28.992: INFO: Pod "downwardapi-volume-882fca17-c221-4007-9879-887a090dc7b0" satisfied condition "Succeeded or Failed" +Aug 3 07:14:28.997: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-882fca17-c221-4007-9879-887a090dc7b0 container client-container: +STEP: delete the pod +Aug 3 07:14:29.053: INFO: Waiting for pod downwardapi-volume-882fca17-c221-4007-9879-887a090dc7b0 to disappear +Aug 3 07:14:29.061: INFO: Pod downwardapi-volume-882fca17-c221-4007-9879-887a090dc7b0 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:14:29.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8020" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":143,"skipped":2531,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:14:29.090: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:14:29.162: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: client-side validation (kubectl create and apply) allows request with known and required properties +Aug 3 07:14:35.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 --namespace=crd-publish-openapi-3528 create -f -' +Aug 3 07:14:36.190: INFO: stderr: "" +Aug 3 07:14:36.190: INFO: stdout: "e2e-test-crd-publish-openapi-6558-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Aug 3 07:14:36.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 --namespace=crd-publish-openapi-3528 delete e2e-test-crd-publish-openapi-6558-crds test-foo' +Aug 3 07:14:36.307: INFO: stderr: "" +Aug 3 07:14:36.308: INFO: stdout: "e2e-test-crd-publish-openapi-6558-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Aug 3 07:14:36.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 --namespace=crd-publish-openapi-3528 apply -f -' +Aug 3 07:14:37.309: INFO: stderr: "" +Aug 3 07:14:37.309: INFO: stdout: "e2e-test-crd-publish-openapi-6558-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Aug 3 07:14:37.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 --namespace=crd-publish-openapi-3528 delete e2e-test-crd-publish-openapi-6558-crds test-foo' +Aug 3 07:14:37.411: INFO: stderr: "" +Aug 3 
07:14:37.411: INFO: stdout: "e2e-test-crd-publish-openapi-6558-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: client-side validation (kubectl create and apply) rejects request with value outside defined enum values +Aug 3 07:14:37.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 --namespace=crd-publish-openapi-3528 create -f -' +Aug 3 07:14:38.365: INFO: rc: 1 +STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema +Aug 3 07:14:38.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 --namespace=crd-publish-openapi-3528 create -f -' +Aug 3 07:14:38.611: INFO: rc: 1 +Aug 3 07:14:38.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 --namespace=crd-publish-openapi-3528 apply -f -' +Aug 3 07:14:38.860: INFO: rc: 1 +STEP: client-side validation (kubectl create and apply) rejects request without required properties +Aug 3 07:14:38.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 --namespace=crd-publish-openapi-3528 create -f -' +Aug 3 07:14:39.110: INFO: rc: 1 +Aug 3 07:14:39.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 --namespace=crd-publish-openapi-3528 apply -f -' +Aug 3 07:14:39.373: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties +Aug 3 07:14:39.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 explain e2e-test-crd-publish-openapi-6558-crds' +Aug 3 07:14:39.622: INFO: stderr: "" +Aug 3 07:14:39.622: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6558-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively +Aug 3 07:14:39.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 explain e2e-test-crd-publish-openapi-6558-crds.metadata' +Aug 3 07:14:39.945: INFO: stderr: "" +Aug 3 07:14:39.946: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6558-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. 
There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Aug 3 07:14:39.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 explain e2e-test-crd-publish-openapi-6558-crds.spec' +Aug 3 07:14:40.180: INFO: stderr: "" +Aug 3 07:14:40.180: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6558-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Aug 3 07:14:40.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 explain e2e-test-crd-publish-openapi-6558-crds.spec.bars' +Aug 3 07:14:40.405: INFO: stderr: "" +Aug 3 07:14:40.405: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6558-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist +Aug 3 07:14:40.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-3528 explain e2e-test-crd-publish-openapi-6558-crds.spec.bars2' +Aug 3 07:14:40.646: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:14:44.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-3528" for this suite. 
+ +• [SLOW TEST:15.307 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":144,"skipped":2536,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:14:44.398: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +Aug 3 07:14:44.463: INFO: Creating simple deployment test-deployment-pn5d5 +Aug 3 07:14:44.487: INFO: new replicaset for deployment "test-deployment-pn5d5" is yet to be created +Aug 3 07:14:46.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 14, 44, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 14, 44, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 14, 44, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 14, 44, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-pn5d5-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Getting /status +Aug 3 07:14:48.558: INFO: Deployment test-deployment-pn5d5 has Conditions: [{Available True 2022-08-03 07:14:47 +0000 UTC 2022-08-03 07:14:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2022-08-03 07:14:47 +0000 UTC 2022-08-03 07:14:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-pn5d5-764bc7c4b7" has successfully progressed.}] +STEP: updating Deployment Status +Aug 3 07:14:48.580: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 14, 47, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 14, 47, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2022, time.August, 3, 7, 14, 47, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 14, 44, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-pn5d5-764bc7c4b7\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated +Aug 3 07:14:48.584: INFO: Observed &Deployment event: ADDED +Aug 3 07:14:48.584: INFO: Observed Deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-03 07:14:44 +0000 UTC 2022-08-03 07:14:44 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-pn5d5-764bc7c4b7"} +Aug 3 07:14:48.584: INFO: Observed &Deployment event: MODIFIED +Aug 3 07:14:48.584: INFO: Observed Deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-03 07:14:44 +0000 UTC 2022-08-03 07:14:44 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-pn5d5-764bc7c4b7"} +Aug 3 07:14:48.584: INFO: Observed Deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-08-03 07:14:44 +0000 UTC 2022-08-03 07:14:44 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Aug 3 07:14:48.584: INFO: Observed &Deployment event: MODIFIED +Aug 3 07:14:48.584: INFO: Observed Deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-08-03 07:14:44 +0000 UTC 2022-08-03 07:14:44 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Aug 3 07:14:48.585: INFO: Observed Deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-03 07:14:44 +0000 UTC 2022-08-03 07:14:44 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-pn5d5-764bc7c4b7" is progressing.} +Aug 3 07:14:48.585: INFO: Observed &Deployment event: MODIFIED +Aug 3 07:14:48.585: INFO: Observed Deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-08-03 07:14:47 +0000 UTC 2022-08-03 07:14:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Aug 3 07:14:48.585: INFO: Observed Deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-03 07:14:47 +0000 UTC 2022-08-03 07:14:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-pn5d5-764bc7c4b7" has successfully progressed.} +Aug 3 07:14:48.586: INFO: Observed &Deployment event: MODIFIED +Aug 3 07:14:48.586: INFO: Observed Deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-08-03 07:14:47 +0000 UTC 2022-08-03 07:14:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Aug 3 07:14:48.586: INFO: Observed Deployment 
test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-03 07:14:47 +0000 UTC 2022-08-03 07:14:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-pn5d5-764bc7c4b7" has successfully progressed.} +Aug 3 07:14:48.586: INFO: Found Deployment test-deployment-pn5d5 in namespace deployment-8111 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Aug 3 07:14:48.586: INFO: Deployment test-deployment-pn5d5 has an updated status +STEP: patching the Statefulset Status +Aug 3 07:14:48.586: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Aug 3 07:14:48.596: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched +Aug 3 07:14:48.601: INFO: Observed &Deployment event: ADDED +Aug 3 07:14:48.601: INFO: Observed deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-03 07:14:44 +0000 UTC 2022-08-03 07:14:44 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-pn5d5-764bc7c4b7"} +Aug 3 07:14:48.601: INFO: Observed &Deployment event: MODIFIED +Aug 3 07:14:48.601: INFO: Observed deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-03 07:14:44 +0000 UTC 2022-08-03 07:14:44 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-pn5d5-764bc7c4b7"} +Aug 3 07:14:48.601: INFO: Observed deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-08-03 07:14:44 +0000 UTC 2022-08-03 07:14:44 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Aug 3 07:14:48.601: INFO: Observed &Deployment event: MODIFIED +Aug 3 07:14:48.601: INFO: Observed deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-08-03 07:14:44 +0000 UTC 2022-08-03 07:14:44 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Aug 3 07:14:48.601: INFO: Observed deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-03 07:14:44 +0000 UTC 2022-08-03 07:14:44 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-pn5d5-764bc7c4b7" is progressing.} +Aug 3 07:14:48.601: INFO: Observed &Deployment event: MODIFIED +Aug 3 07:14:48.601: INFO: Observed deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-08-03 07:14:47 +0000 UTC 2022-08-03 07:14:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Aug 3 07:14:48.601: INFO: Observed deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] 
& Conditions: {Progressing True 2022-08-03 07:14:47 +0000 UTC 2022-08-03 07:14:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-pn5d5-764bc7c4b7" has successfully progressed.} +Aug 3 07:14:48.602: INFO: Observed &Deployment event: MODIFIED +Aug 3 07:14:48.602: INFO: Observed deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-08-03 07:14:47 +0000 UTC 2022-08-03 07:14:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Aug 3 07:14:48.602: INFO: Observed deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-03 07:14:47 +0000 UTC 2022-08-03 07:14:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-pn5d5-764bc7c4b7" has successfully progressed.} +Aug 3 07:14:48.602: INFO: Observed deployment test-deployment-pn5d5 in namespace deployment-8111 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Aug 3 07:14:48.602: INFO: Observed &Deployment event: MODIFIED +Aug 3 07:14:48.602: INFO: Found deployment test-deployment-pn5d5 in namespace deployment-8111 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Aug 3 07:14:48.602: INFO: Deployment test-deployment-pn5d5 has a patched status +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 3 07:14:48.612: INFO: Deployment "test-deployment-pn5d5": +&Deployment{ObjectMeta:{test-deployment-pn5d5 deployment-8111 369fd944-f428-48d3-91c4-1b597000a903 620063 1 2022-08-03 07:14:44 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00484ce98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:FoundNewReplicaSet,Message:Found new replica set "test-deployment-pn5d5-764bc7c4b7",LastUpdateTime:2022-08-03 07:14:48 +0000 UTC,LastTransitionTime:2022-08-03 07:14:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Aug 3 07:14:48.620: INFO: New ReplicaSet "test-deployment-pn5d5-764bc7c4b7" of Deployment "test-deployment-pn5d5": +&ReplicaSet{ObjectMeta:{test-deployment-pn5d5-764bc7c4b7 deployment-8111 8559e69f-67dc-464c-b7ad-74b6382a7bed 620056 1 2022-08-03 07:14:44 +0000 UTC map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-pn5d5 369fd944-f428-48d3-91c4-1b597000a903 0xc00484d2c0 0xc00484d2c1}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 764bc7c4b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00484d338 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Aug 3 07:14:48.626: INFO: Pod "test-deployment-pn5d5-764bc7c4b7-fk2xh" is available: +&Pod{ObjectMeta:{test-deployment-pn5d5-764bc7c4b7-fk2xh test-deployment-pn5d5-764bc7c4b7- deployment-8111 d0f527c3-585e-44e1-be32-4d1b44a672e3 620055 0 2022-08-03 07:14:44 +0000 UTC map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet test-deployment-pn5d5-764bc7c4b7 8559e69f-67dc-464c-b7ad-74b6382a7bed 0xc00484d800 0xc00484d801}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vd4wn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vd4wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:14:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:14:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:172.29.175.46,StartTime:2022-08-03 07:14:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:14:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://c616c777dbba22af4d1ad812ec67b6c5c6a258877afe4977fb2c8df832ca3b21,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.175.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:14:48.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-8111" for this suite. +•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":346,"completed":145,"skipped":2560,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:14:48.648: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename aggregator +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 +Aug 3 07:14:48.763: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the sample API server. 
+Aug 3 07:14:49.496: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Aug 3 07:14:51.605: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 14, 49, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 14, 49, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 14, 49, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 14, 49, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7b4b967944\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:15:02.856: INFO: Waited 9.217779843s for the sample-apiserver to be ready to handle requests. +STEP: Read Status for v1alpha1.wardle.example.com +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' +STEP: List APIServices +Aug 3 07:15:03.055: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:15:03.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-6976" for this suite. 
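+
+The steps above register the 1.17 sample API server behind the aggregation layer and then drive the apiregistration.k8s.io API. The same operations can be repeated by hand with kubectl; a minimal sketch, assuming the sample API server is already deployed (the APIService name and patch payload are taken from the log, the rest is illustrative):
+```
+# List registered APIServices and confirm the aggregated group is present
+kubectl get apiservices
+
+# Read the registration object, including its Available condition
+kubectl get apiservice v1alpha1.wardle.example.com -o yaml
+
+# Raise the version priority, exactly as the suite does above
+kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}'
+
+# Query the aggregated API through the kube-apiserver
+kubectl get --raw /apis/wardle.example.com/v1alpha1
+```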
+ +• [SLOW TEST:14.945 seconds] +[sig-api-machinery] Aggregator +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":346,"completed":146,"skipped":2579,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:15:03.593: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename ingress +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Aug 3 07:15:03.688: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Aug 3 07:15:03.695: INFO: starting watch +STEP: patching +STEP: updating +Aug 3 07:15:03.720: INFO: waiting for watch events with expected annotations +Aug 3 07:15:03.720: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:15:03.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingress-2722" for this suite. 
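+
+The Ingress API test above walks the full verb set (create, get, list, watch, patch, update, delete, deletecollection) plus the /status subresource against networking.k8s.io/v1. A minimal sketch of the core verbs with kubectl; the Ingress name, host, and backend Service below are hypothetical:
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: demo-ingress                 # hypothetical name
+spec:
+  rules:
+  - host: demo.example.com           # hypothetical host
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: demo-svc           # hypothetical backend Service
+            port:
+              number: 80
+EOF
+
+kubectl get ingress demo-ingress                                  # get
+kubectl get ingress --all-namespaces                              # cluster-wide list
+kubectl patch ingress demo-ingress -p '{"metadata":{"annotations":{"patched":"true"}}}'  # patch
+kubectl delete ingress demo-ingress                               # delete
+```
+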
+•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":346,"completed":147,"skipped":2599,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:15:03.823: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename sysctl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl +STEP: Watching for error events or started pod +STEP: Waiting for pod completion +STEP: Checking that the pod succeeded +STEP: Getting logs from the pod +STEP: Checking that the sysctl is actually updated +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:15:07.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-2065" for this suite. 
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":148,"skipped":2609,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:15:08.003: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:15:08.063: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Creating first CR +Aug 3 07:15:10.666: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-03T07:15:10Z generation:1 name:name1 resourceVersion:620331 uid:4d4daf4d-eb40-4140-ae6b-a6154f3a7908] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR +Aug 3 07:15:20.698: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-03T07:15:20Z generation:1 name:name2 resourceVersion:620380 uid:da0f8bb3-fe69-4324-9174-b711b08279da] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR +Aug 3 07:15:30.732: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-03T07:15:10Z generation:2 name:name1 resourceVersion:620411 uid:4d4daf4d-eb40-4140-ae6b-a6154f3a7908] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR +Aug 3 07:15:40.755: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-03T07:15:20Z generation:2 name:name2 resourceVersion:620444 uid:da0f8bb3-fe69-4324-9174-b711b08279da] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR +Aug 3 07:15:50.781: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-03T07:15:10Z generation:2 name:name1 resourceVersion:620472 uid:4d4daf4d-eb40-4140-ae6b-a6154f3a7908] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR +Aug 3 07:16:00.807: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-03T07:15:20Z generation:2 name:name2 resourceVersion:620505 uid:da0f8bb3-fe69-4324-9174-b711b08279da] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:16:11.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-watch-2808" for this suite. + +• [SLOW TEST:63.363 seconds] +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + CustomResourceDefinition Watch + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":346,"completed":149,"skipped":2690,"failed":0} +SS +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:16:11.365: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 3 07:16:15.496: INFO: DNS probes using dns-8038/dns-test-e930060f-6149-46d8-bf7f-7fac4f222d35 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:16:15.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-8038" for this suite. 
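+
+The dig loops above probe kubernetes.default.svc.cluster.local over both UDP and TCP from two different base images. A quicker manual equivalent is a one-off probe pod; busybox:1.28 is an assumed but common choice here, since the nslookup in newer busybox builds is known to misbehave against cluster DNS:
+```
+kubectl run dns-probe --image=busybox:1.28 --restart=Never --rm -it -- \
+  nslookup kubernetes.default.svc.cluster.local
+```
+A successful lookup returns the cluster IP of the kubernetes Service, confirming the same in-cluster DNS path the conformance test exercises.
+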
+•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":346,"completed":150,"skipped":2692,"failed":0} +SS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:16:15.542: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +STEP: get a list of Events with a label in the current namespace +STEP: delete a list of events +Aug 3 07:16:15.645: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:16:15.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-9079" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":151,"skipped":2694,"failed":0} +S +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:16:15.705: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching services +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:16:15.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7979" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":152,"skipped":2695,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:16:15.807: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:16:15.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3867" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":346,"completed":153,"skipped":2705,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:16:15.896: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:16:15.963: INFO: The status of Pod busybox-scheduling-8b2e5c29-1a81-4eeb-b7f6-b741cf76b5fa is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:16:17.979: INFO: The status of Pod busybox-scheduling-8b2e5c29-1a81-4eeb-b7f6-b741cf76b5fa is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:16:19.975: INFO: The status of Pod busybox-scheduling-8b2e5c29-1a81-4eeb-b7f6-b741cf76b5fa is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:16:21.980: INFO: The status of Pod busybox-scheduling-8b2e5c29-1a81-4eeb-b7f6-b741cf76b5fa is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:16:21.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-9571" for this suite. + +• [SLOW TEST:6.120 seconds] +[sig-node] Kubelet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when scheduling a busybox command in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41 + should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":346,"completed":154,"skipped":2723,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:16:22.017: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:16:22.440: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 07:16:24.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 16, 22, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 16, 22, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 16, 22, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 16, 22, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:16:26.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 16, 22, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 16, 22, 0, 
time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 16, 22, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 16, 22, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 07:16:29.498: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a mutating webhook configuration +STEP: Updating a mutating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that should not be mutated +STEP: Patching a mutating webhook configuration's rules to include the create operation +STEP: Creating a configMap that should be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:16:29.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8290" for this suite. +STEP: Destroying namespace "webhook-8290-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:7.732 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":346,"completed":155,"skipped":2802,"failed":0} +S +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:16:29.749: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +STEP: waiting for Deployment to be created +STEP: waiting for all Replicas to be Ready +Aug 3 07:16:29.873: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 and 
labels map[test-deployment-static:true] +Aug 3 07:16:29.873: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 3 07:16:29.885: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 3 07:16:29.885: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 3 07:16:29.911: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 3 07:16:29.915: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 3 07:16:29.928: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 3 07:16:29.928: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 3 07:16:34.127: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Aug 3 07:16:34.127: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Aug 3 07:16:34.310: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment +Aug 3 07:16:34.332: INFO: observed event type ADDED +STEP: waiting for Replicas to scale +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 0 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:34.337: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:34.344: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:34.344: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 
07:16:34.360: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:34.360: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:34.378: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +Aug 3 07:16:34.378: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +Aug 3 07:16:34.422: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +Aug 3 07:16:34.422: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +Aug 3 07:16:39.371: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:39.372: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:39.393: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +STEP: listing Deployments +Aug 3 07:16:39.421: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment +Aug 3 07:16:39.437: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus +Aug 3 07:16:39.458: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Aug 3 07:16:39.458: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Aug 3 07:16:39.480: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Aug 3 07:16:39.495: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Aug 3 07:16:39.512: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Aug 3 07:16:43.322: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Aug 3 07:16:43.588: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +Aug 3 07:16:43.664: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Aug 3 07:16:43.675: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Aug 3 07:16:46.588: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus +STEP: fetching the DeploymentStatus +Aug 3 07:16:46.656: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +Aug 3 07:16:46.656: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +Aug 3 07:16:46.656: INFO: observed Deployment test-deployment in namespace deployment-1147 with 
ReadyReplicas 1 +Aug 3 07:16:46.656: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +Aug 3 07:16:46.656: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 1 +Aug 3 07:16:46.656: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:46.656: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 3 +Aug 3 07:16:46.656: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:46.656: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 2 +Aug 3 07:16:46.656: INFO: observed Deployment test-deployment in namespace deployment-1147 with ReadyReplicas 3 +STEP: deleting the Deployment +Aug 3 07:16:46.680: INFO: observed event type MODIFIED +Aug 3 07:16:46.680: INFO: observed event type MODIFIED +Aug 3 07:16:46.680: INFO: observed event type MODIFIED +Aug 3 07:16:46.681: INFO: observed event type MODIFIED +Aug 3 07:16:46.681: INFO: observed event type MODIFIED +Aug 3 07:16:46.681: INFO: observed event type MODIFIED +Aug 3 07:16:46.681: INFO: observed event type MODIFIED +Aug 3 07:16:46.681: INFO: observed event type MODIFIED +Aug 3 07:16:46.681: INFO: observed event type MODIFIED +Aug 3 07:16:46.681: INFO: observed event type MODIFIED +Aug 3 07:16:46.681: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 3 07:16:46.692: INFO: Log out all the ReplicaSets if there is no deployment created +Aug 3 07:16:46.700: INFO: ReplicaSet "test-deployment-5ddd8b47d8": +&ReplicaSet{ObjectMeta:{test-deployment-5ddd8b47d8 deployment-1147 8df4210e-7bfa-4dc1-9c1d-3edba2ce72ca 621070 4 2022-08-03 07:16:34 +0000 UTC map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment ca460a38-ada5-44a5-a2a3-edfbb1b6f2f0 0xc0050a2637 0xc0050a2638}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 5ddd8b47d8,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.6 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0050a2688 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Aug 3 07:16:46.711: INFO: pod: "test-deployment-5ddd8b47d8-94xkt": +&Pod{ObjectMeta:{test-deployment-5ddd8b47d8-94xkt test-deployment-5ddd8b47d8- deployment-1147 
df76579e-35fd-469c-b49b-b3fe64b4dec6 621066 0 2022-08-03 07:16:39 +0000 UTC 2022-08-03 07:16:47 +0000 UTC 0xc00513f898 map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet test-deployment-5ddd8b47d8 8df4210e-7bfa-4dc1-9c1d-3edba2ce72ca 0xc00513f8c7 0xc00513f8c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sqcng,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.6,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sqcng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:172.29.31.95,StartTime:2022-08-03 07:16:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:16:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.6,ImageID:docker-pullable://k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,ContainerID:docker://7c42a90a7e353d699754822332ea47ebc466f559a779548966f35d75ad2a8fc7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.31.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Aug 3 07:16:46.712: INFO: ReplicaSet "test-deployment-6cdc5bc678": +&ReplicaSet{ObjectMeta:{test-deployment-6cdc5bc678 deployment-1147 0d3baefb-22bd-4e1b-ba2a-dab41d37112c 620954 3 2022-08-03 07:16:29 +0000 UTC map[pod-template-hash:6cdc5bc678 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment ca460a38-ada5-44a5-a2a3-edfbb1b6f2f0 0xc0050a26e7 0xc0050a26e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 6cdc5bc678,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:6cdc5bc678 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0050a2738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Aug 3 07:16:46.730: INFO: ReplicaSet "test-deployment-854fdc678": +&ReplicaSet{ObjectMeta:{test-deployment-854fdc678 deployment-1147 257cff46-3824-4277-847f-813208f1fa61 621062 2 2022-08-03 07:16:39 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment ca460a38-ada5-44a5-a2a3-edfbb1b6f2f0 0xc0050a2797 0xc0050a2798}] [] []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 854fdc678,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0050a27e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + +Aug 3 07:16:46.742: INFO: pod: "test-deployment-854fdc678-hcg87": +&Pod{ObjectMeta:{test-deployment-854fdc678-hcg87 test-deployment-854fdc678- deployment-1147 366326fe-3159-4903-8eb5-1de7d10706f1 621061 0 2022-08-03 07:16:43 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet test-deployment-854fdc678 257cff46-3824-4277-847f-813208f1fa61 0xc0050a2d57 0xc0050a2d58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5f5kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5f5kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type
:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:172.29.31.121,StartTime:2022-08-03 07:16:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:16:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://624b3533a56cd7d0f4770a9321ca49366e8d31f46c94624fb95016f0ca9fcd06,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.31.121,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Aug 3 07:16:46.743: INFO: pod: "test-deployment-854fdc678-hzbmn": +&Pod{ObjectMeta:{test-deployment-854fdc678-hzbmn test-deployment-854fdc678- deployment-1147 80eb39d3-ffde-4770-8a74-bfd2dd894b79 621017 0 2022-08-03 07:16:39 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet test-deployment-854fdc678 257cff46-3824-4277-847f-813208f1fa61 0xc0050a2f27 0xc0050a2f28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8hvqb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8hvqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type
:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:16:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:172.29.175.2,StartTime:2022-08-03 07:16:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:16:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://3f4dc363831b6b6f2147705edd2fde4aba0ae812cec6cb4e2582016640ea1037,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.175.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:16:46.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1147" for this suite. + +• [SLOW TEST:17.045 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":156,"skipped":2803,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:16:46.795: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting a starting resourceVersion +STEP: starting a background goroutine to produce watch events +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:16:49.555: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-7935" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":346,"completed":157,"skipped":2810,"failed":0} +SS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:16:49.663: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-xtqv8 in namespace proxy-5131 +I0803 07:16:49.818983 21 runners.go:193] Created replication controller with name: proxy-service-xtqv8, namespace: proxy-5131, replica count: 1 +I0803 07:16:50.870483 21 runners.go:193] proxy-service-xtqv8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 07:16:51.870888 21 runners.go:193] proxy-service-xtqv8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 07:16:52.871917 21 runners.go:193] proxy-service-xtqv8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 07:16:53.872538 21 runners.go:193] proxy-service-xtqv8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0803 07:16:54.873705 21 runners.go:193] proxy-service-xtqv8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 07:16:54.881: INFO: setup took 5.130145071s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Aug 3 07:16:54.905: INFO: (0) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 23.257175ms) +Aug 3 07:16:54.905: INFO: (0) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... (200; 22.933381ms) +Aug 3 07:16:54.905: INFO: (0) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 23.099221ms) +Aug 3 07:16:54.905: INFO: (0) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 22.97789ms) +Aug 3 07:16:54.905: INFO: (0) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 23.062106ms) +Aug 3 07:16:54.905: INFO: (0) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... 
(200; 23.226389ms) +Aug 3 07:16:54.905: INFO: (0) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 23.170506ms) +Aug 3 07:16:54.905: INFO: (0) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 23.197358ms) +Aug 3 07:16:54.905: INFO: (0) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 23.429454ms) +Aug 3 07:16:54.905: INFO: (0) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 23.183079ms) +Aug 3 07:16:54.908: INFO: (0) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test<... (200; 14.247076ms) +Aug 3 07:16:54.924: INFO: (1) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 14.311478ms) +Aug 3 07:16:54.924: INFO: (1) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... (200; 14.262191ms) +Aug 3 07:16:54.924: INFO: (1) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 14.393011ms) +Aug 3 07:16:54.924: INFO: (1) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 14.575648ms) +Aug 3 07:16:54.924: INFO: (1) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 14.447616ms) +Aug 3 07:16:54.924: INFO: (1) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 14.447136ms) +Aug 3 07:16:54.933: INFO: (1) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 22.967728ms) +Aug 3 07:16:54.938: INFO: (1) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 28.172733ms) +Aug 3 07:16:54.938: INFO: (1) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 28.178989ms) +Aug 3 07:16:54.938: INFO: (1) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 28.485115ms) +Aug 3 07:16:54.938: INFO: (1) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 28.224204ms) +Aug 3 07:16:54.938: INFO: (1) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 28.601783ms) +Aug 3 07:16:54.953: INFO: (2) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 14.48241ms) +Aug 3 07:16:54.953: INFO: (2) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 12.618281ms) +Aug 3 07:16:54.953: INFO: (2) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 13.400924ms) +Aug 3 07:16:54.953: INFO: (2) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... (200; 14.297232ms) +Aug 3 07:16:54.953: INFO: (2) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 14.075772ms) +Aug 3 07:16:54.953: INFO: (2) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test<... 
(200; 12.831709ms) +Aug 3 07:16:54.953: INFO: (2) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 13.66116ms) +Aug 3 07:16:54.953: INFO: (2) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 13.234714ms) +Aug 3 07:16:54.953: INFO: (2) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 12.830927ms) +Aug 3 07:16:54.955: INFO: (2) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 15.596337ms) +Aug 3 07:16:54.971: INFO: (2) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 31.204559ms) +Aug 3 07:16:54.971: INFO: (2) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 31.82409ms) +Aug 3 07:16:54.971: INFO: (2) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 30.976481ms) +Aug 3 07:16:54.977: INFO: (2) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 38.476305ms) +Aug 3 07:16:54.988: INFO: (3) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 10.197511ms) +Aug 3 07:16:54.990: INFO: (3) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 12.149457ms) +Aug 3 07:16:54.990: INFO: (3) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: ... (200; 12.145607ms) +Aug 3 07:16:54.990: INFO: (3) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... (200; 12.230784ms) +Aug 3 07:16:54.991: INFO: (3) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 12.187473ms) +Aug 3 07:16:54.993: INFO: (3) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 15.235541ms) +Aug 3 07:16:54.993: INFO: (3) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 15.209196ms) +Aug 3 07:16:54.993: INFO: (3) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 15.66136ms) +Aug 3 07:16:54.994: INFO: (3) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 16.067878ms) +Aug 3 07:16:54.994: INFO: (3) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 16.165889ms) +Aug 3 07:16:54.994: INFO: (3) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 16.801208ms) +Aug 3 07:16:55.005: INFO: (4) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 10.601359ms) +Aug 3 07:16:55.005: INFO: (4) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 10.528928ms) +Aug 3 07:16:55.005: INFO: (4) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... (200; 10.906283ms) +Aug 3 07:16:55.011: INFO: (4) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 15.160003ms) +Aug 3 07:16:55.011: INFO: (4) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... 
(200; 15.529489ms) +Aug 3 07:16:55.011: INFO: (4) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 16.036998ms) +Aug 3 07:16:55.012: INFO: (4) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test (200; 16.997936ms) +Aug 3 07:16:55.012: INFO: (4) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 16.684106ms) +Aug 3 07:16:55.012: INFO: (4) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 17.497287ms) +Aug 3 07:16:55.012: INFO: (4) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 16.478341ms) +Aug 3 07:16:55.014: INFO: (4) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 18.744717ms) +Aug 3 07:16:55.017: INFO: (4) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 21.398872ms) +Aug 3 07:16:55.021: INFO: (4) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 25.7585ms) +Aug 3 07:16:55.021: INFO: (4) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 25.704752ms) +Aug 3 07:16:55.021: INFO: (4) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 26.572793ms) +Aug 3 07:16:55.035: INFO: (5) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 13.010822ms) +Aug 3 07:16:55.035: INFO: (5) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... (200; 13.183689ms) +Aug 3 07:16:55.035: INFO: (5) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 13.529388ms) +Aug 3 07:16:55.035: INFO: (5) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 13.545182ms) +Aug 3 07:16:55.035: INFO: (5) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... 
(200; 13.715956ms) +Aug 3 07:16:55.035: INFO: (5) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test (200; 14.013899ms) +Aug 3 07:16:55.040: INFO: (5) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 18.084079ms) +Aug 3 07:16:55.040: INFO: (5) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 18.734897ms) +Aug 3 07:16:55.042: INFO: (5) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 20.313212ms) +Aug 3 07:16:55.042: INFO: (5) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 20.419454ms) +Aug 3 07:16:55.042: INFO: (5) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 20.331468ms) +Aug 3 07:16:55.042: INFO: (5) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 20.35249ms) +Aug 3 07:16:55.053: INFO: (6) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 10.665222ms) +Aug 3 07:16:55.053: INFO: (6) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 10.441166ms) +Aug 3 07:16:55.055: INFO: (6) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 11.730087ms) +Aug 3 07:16:55.055: INFO: (6) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 12.054975ms) +Aug 3 07:16:55.055: INFO: (6) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 12.344271ms) +Aug 3 07:16:55.056: INFO: (6) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 12.695775ms) +Aug 3 07:16:55.056: INFO: (6) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test<... (200; 12.250489ms) +Aug 3 07:16:55.056: INFO: (6) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... (200; 11.598276ms) +Aug 3 07:16:55.056: INFO: (6) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 12.710657ms) +Aug 3 07:16:55.056: INFO: (6) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 12.098657ms) +Aug 3 07:16:55.056: INFO: (6) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 11.824332ms) +Aug 3 07:16:55.062: INFO: (6) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 18.0615ms) +Aug 3 07:16:55.062: INFO: (6) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 18.267193ms) +Aug 3 07:16:55.062: INFO: (6) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 17.973255ms) +Aug 3 07:16:55.062: INFO: (6) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 18.246495ms) +Aug 3 07:16:55.074: INFO: (7) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 10.625895ms) +Aug 3 07:16:55.074: INFO: (7) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 11.083134ms) +Aug 3 07:16:55.074: INFO: (7) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... (200; 11.619013ms) +Aug 3 07:16:55.074: INFO: (7) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test<... 
(200; 11.017999ms) +Aug 3 07:16:55.074: INFO: (7) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 10.811694ms) +Aug 3 07:16:55.074: INFO: (7) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 10.881953ms) +Aug 3 07:16:55.074: INFO: (7) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 12.040959ms) +Aug 3 07:16:55.074: INFO: (7) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 10.858542ms) +Aug 3 07:16:55.074: INFO: (7) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 10.883377ms) +Aug 3 07:16:55.075: INFO: (7) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 12.400894ms) +Aug 3 07:16:55.079: INFO: (7) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 16.020587ms) +Aug 3 07:16:55.080: INFO: (7) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 18.19691ms) +Aug 3 07:16:55.082: INFO: (7) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 18.814132ms) +Aug 3 07:16:55.084: INFO: (7) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 20.827603ms) +Aug 3 07:16:55.094: INFO: (8) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 9.216985ms) +Aug 3 07:16:55.097: INFO: (8) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 12.517438ms) +Aug 3 07:16:55.097: INFO: (8) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 12.005991ms) +Aug 3 07:16:55.097: INFO: (8) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 12.092634ms) +Aug 3 07:16:55.097: INFO: (8) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test (200; 12.065897ms) +Aug 3 07:16:55.097: INFO: (8) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... (200; 12.266789ms) +Aug 3 07:16:55.097: INFO: (8) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... 
(200; 12.08014ms) +Aug 3 07:16:55.108: INFO: (8) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 23.58718ms) +Aug 3 07:16:55.108: INFO: (8) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 23.582937ms) +Aug 3 07:16:55.108: INFO: (8) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 23.629222ms) +Aug 3 07:16:55.108: INFO: (8) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 23.970132ms) +Aug 3 07:16:55.108: INFO: (8) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 24.148121ms) +Aug 3 07:16:55.108: INFO: (8) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 23.80553ms) +Aug 3 07:16:55.108: INFO: (8) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 23.887529ms) +Aug 3 07:16:55.109: INFO: (8) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 23.943398ms) +Aug 3 07:16:55.118: INFO: (9) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 8.337674ms) +Aug 3 07:16:55.121: INFO: (9) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... (200; 11.763857ms) +Aug 3 07:16:55.121: INFO: (9) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 11.769992ms) +Aug 3 07:16:55.124: INFO: (9) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... (200; 14.311146ms) +Aug 3 07:16:55.124: INFO: (9) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 14.426809ms) +Aug 3 07:16:55.124: INFO: (9) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 14.593607ms) +Aug 3 07:16:55.124: INFO: (9) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 14.701822ms) +Aug 3 07:16:55.124: INFO: (9) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 14.565645ms) +Aug 3 07:16:55.124: INFO: (9) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 14.481963ms) +Aug 3 07:16:55.124: INFO: (9) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: ... (200; 12.017401ms) +Aug 3 07:16:55.138: INFO: (10) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 12.092452ms) +Aug 3 07:16:55.138: INFO: (10) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 11.999362ms) +Aug 3 07:16:55.138: INFO: (10) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 12.361698ms) +Aug 3 07:16:55.138: INFO: (10) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 11.969191ms) +Aug 3 07:16:55.138: INFO: (10) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 12.655318ms) +Aug 3 07:16:55.138: INFO: (10) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 12.305582ms) +Aug 3 07:16:55.138: INFO: (10) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... 
(200; 13.108422ms) +Aug 3 07:16:55.138: INFO: (10) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 12.409337ms) +Aug 3 07:16:55.140: INFO: (10) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 14.501207ms) +Aug 3 07:16:55.150: INFO: (10) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 23.901082ms) +Aug 3 07:16:55.151: INFO: (10) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 25.103264ms) +Aug 3 07:16:55.151: INFO: (10) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 25.116685ms) +Aug 3 07:16:55.153: INFO: (10) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 26.985434ms) +Aug 3 07:16:55.163: INFO: (11) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 9.05588ms) +Aug 3 07:16:55.168: INFO: (11) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... (200; 14.232901ms) +Aug 3 07:16:55.168: INFO: (11) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 13.861239ms) +Aug 3 07:16:55.168: INFO: (11) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 13.679577ms) +Aug 3 07:16:55.168: INFO: (11) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 13.744546ms) +Aug 3 07:16:55.168: INFO: (11) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 14.927248ms) +Aug 3 07:16:55.168: INFO: (11) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test<... (200; 14.545888ms) +Aug 3 07:16:55.169: INFO: (11) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 14.845993ms) +Aug 3 07:16:55.172: INFO: (11) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 17.341137ms) +Aug 3 07:16:55.172: INFO: (11) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 18.890993ms) +Aug 3 07:16:55.173: INFO: (11) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 19.147212ms) +Aug 3 07:16:55.173: INFO: (11) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 18.458976ms) +Aug 3 07:16:55.173: INFO: (11) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 18.908919ms) +Aug 3 07:16:55.184: INFO: (12) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 10.90752ms) +Aug 3 07:16:55.184: INFO: (12) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... (200; 10.391906ms) +Aug 3 07:16:55.184: INFO: (12) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 10.36538ms) +Aug 3 07:16:55.184: INFO: (12) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 10.636913ms) +Aug 3 07:16:55.184: INFO: (12) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 10.2286ms) +Aug 3 07:16:55.184: INFO: (12) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test<... 
(200; 10.69621ms) +Aug 3 07:16:55.184: INFO: (12) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 11.0042ms) +Aug 3 07:16:55.185: INFO: (12) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 10.830368ms) +Aug 3 07:16:55.185: INFO: (12) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 11.330059ms) +Aug 3 07:16:55.187: INFO: (12) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 13.12204ms) +Aug 3 07:16:55.189: INFO: (12) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 14.827177ms) +Aug 3 07:16:55.189: INFO: (12) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 14.569408ms) +Aug 3 07:16:55.189: INFO: (12) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 14.770205ms) +Aug 3 07:16:55.189: INFO: (12) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 14.726151ms) +Aug 3 07:16:55.205: INFO: (13) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 15.674585ms) +Aug 3 07:16:55.205: INFO: (13) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 15.655882ms) +Aug 3 07:16:55.205: INFO: (13) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 16.05715ms) +Aug 3 07:16:55.205: INFO: (13) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 15.896997ms) +Aug 3 07:16:55.205: INFO: (13) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 15.737979ms) +Aug 3 07:16:55.205: INFO: (13) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... (200; 15.964324ms) +Aug 3 07:16:55.205: INFO: (13) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: ... 
(200; 15.873693ms) +Aug 3 07:16:55.208: INFO: (13) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 18.476681ms) +Aug 3 07:16:55.210: INFO: (13) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 20.911872ms) +Aug 3 07:16:55.210: INFO: (13) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 21.16555ms) +Aug 3 07:16:55.210: INFO: (13) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 21.026129ms) +Aug 3 07:16:55.210: INFO: (13) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 21.11353ms) +Aug 3 07:16:55.216: INFO: (14) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 5.494067ms) +Aug 3 07:16:55.220: INFO: (14) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 8.250231ms) +Aug 3 07:16:55.220: INFO: (14) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 8.648107ms) +Aug 3 07:16:55.220: INFO: (14) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 8.778182ms) +Aug 3 07:16:55.220: INFO: (14) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 9.00649ms) +Aug 3 07:16:55.220: INFO: (14) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 7.961713ms) +Aug 3 07:16:55.220: INFO: (14) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... (200; 8.329549ms) +Aug 3 07:16:55.220: INFO: (14) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: ... (200; 9.080842ms) +Aug 3 07:16:55.222: INFO: (14) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 11.720912ms) +Aug 3 07:16:55.223: INFO: (14) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 11.45024ms) +Aug 3 07:16:55.223: INFO: (14) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 12.12038ms) +Aug 3 07:16:55.223: INFO: (14) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 11.422786ms) +Aug 3 07:16:55.224: INFO: (14) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 12.532282ms) +Aug 3 07:16:55.224: INFO: (14) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 12.421498ms) +Aug 3 07:16:55.236: INFO: (15) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 11.518268ms) +Aug 3 07:16:55.236: INFO: (15) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 12.094534ms) +Aug 3 07:16:55.236: INFO: (15) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... 
(200; 12.157611ms) +Aug 3 07:16:55.236: INFO: (15) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 12.029687ms) +Aug 3 07:16:55.237: INFO: (15) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 12.595303ms) +Aug 3 07:16:55.237: INFO: (15) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 12.936195ms) +Aug 3 07:16:55.237: INFO: (15) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 12.585069ms) +Aug 3 07:16:55.237: INFO: (15) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 12.89862ms) +Aug 3 07:16:55.237: INFO: (15) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 13.007334ms) +Aug 3 07:16:55.237: INFO: (15) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 13.059355ms) +Aug 3 07:16:55.237: INFO: (15) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 12.990246ms) +Aug 3 07:16:55.237: INFO: (15) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 13.201179ms) +Aug 3 07:16:55.238: INFO: (15) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 13.349173ms) +Aug 3 07:16:55.238: INFO: (15) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... (200; 13.449061ms) +Aug 3 07:16:55.238: INFO: (15) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 13.391772ms) +Aug 3 07:16:55.238: INFO: (15) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test (200; 11.784151ms) +Aug 3 07:16:55.251: INFO: (16) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: ... (200; 22.147742ms) +Aug 3 07:16:55.261: INFO: (16) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 21.992668ms) +Aug 3 07:16:55.261: INFO: (16) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... 
(200; 22.239132ms) +Aug 3 07:16:55.261: INFO: (16) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 22.156175ms) +Aug 3 07:16:55.262: INFO: (16) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 22.756684ms) +Aug 3 07:16:55.262: INFO: (16) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 23.042396ms) +Aug 3 07:16:55.262: INFO: (16) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 22.791239ms) +Aug 3 07:16:55.262: INFO: (16) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 23.218227ms) +Aug 3 07:16:55.262: INFO: (16) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 23.001099ms) +Aug 3 07:16:55.278: INFO: (17) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2/proxy/: test (200; 15.294458ms) +Aug 3 07:16:55.278: INFO: (17) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 15.217525ms) +Aug 3 07:16:55.278: INFO: (17) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 15.210973ms) +Aug 3 07:16:55.278: INFO: (17) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 15.920929ms) +Aug 3 07:16:55.278: INFO: (17) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... (200; 15.161567ms) +Aug 3 07:16:55.278: INFO: (17) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 15.910846ms) +Aug 3 07:16:55.278: INFO: (17) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... (200; 15.578165ms) +Aug 3 07:16:55.278: INFO: (17) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 15.138912ms) +Aug 3 07:16:55.278: INFO: (17) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 15.797606ms) +Aug 3 07:16:55.279: INFO: (17) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: ... (200; 13.993105ms) +Aug 3 07:16:55.299: INFO: (18) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 14.259714ms) +Aug 3 07:16:55.299: INFO: (18) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test (200; 15.674114ms) +Aug 3 07:16:55.301: INFO: (18) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... 
(200; 15.624587ms) +Aug 3 07:16:55.303: INFO: (18) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 17.67461ms) +Aug 3 07:16:55.303: INFO: (18) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 18.230318ms) +Aug 3 07:16:55.303: INFO: (18) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 18.057426ms) +Aug 3 07:16:55.306: INFO: (18) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 20.468804ms) +Aug 3 07:16:55.306: INFO: (18) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 21.322749ms) +Aug 3 07:16:55.307: INFO: (18) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 22.272354ms) +Aug 3 07:16:55.307: INFO: (18) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 22.035792ms) +Aug 3 07:16:55.308: INFO: (18) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 22.513447ms) +Aug 3 07:16:55.327: INFO: (19) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname1/proxy/: foo (200; 17.062494ms) +Aug 3 07:16:55.327: INFO: (19) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:462/proxy/: tls qux (200; 16.91935ms) +Aug 3 07:16:55.327: INFO: (19) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 16.723658ms) +Aug 3 07:16:55.327: INFO: (19) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 17.043174ms) +Aug 3 07:16:55.327: INFO: (19) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:162/proxy/: bar (200; 17.531514ms) +Aug 3 07:16:55.328: INFO: (19) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:1080/proxy/: test<... (200; 18.28072ms) +Aug 3 07:16:55.328: INFO: (19) /api/v1/namespaces/proxy-5131/pods/http:proxy-service-xtqv8-6pvg2:1080/proxy/: ... 
(200; 18.516366ms) +Aug 3 07:16:55.328: INFO: (19) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:460/proxy/: tls baz (200; 18.279487ms) +Aug 3 07:16:55.328: INFO: (19) /api/v1/namespaces/proxy-5131/pods/https:proxy-service-xtqv8-6pvg2:443/proxy/: test (200; 18.448678ms) +Aug 3 07:16:55.328: INFO: (19) /api/v1/namespaces/proxy-5131/pods/proxy-service-xtqv8-6pvg2:160/proxy/: foo (200; 18.327971ms) +Aug 3 07:16:55.332: INFO: (19) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname1/proxy/: tls baz (200; 22.341889ms) +Aug 3 07:16:55.337: INFO: (19) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname1/proxy/: foo (200; 27.908767ms) +Aug 3 07:16:55.338: INFO: (19) /api/v1/namespaces/proxy-5131/services/https:proxy-service-xtqv8:tlsportname2/proxy/: tls qux (200; 28.173675ms) +Aug 3 07:16:55.338: INFO: (19) /api/v1/namespaces/proxy-5131/services/proxy-service-xtqv8:portname2/proxy/: bar (200; 28.54003ms) +Aug 3 07:16:55.338: INFO: (19) /api/v1/namespaces/proxy-5131/services/http:proxy-service-xtqv8:portname2/proxy/: bar (200; 29.031595ms) +STEP: deleting ReplicationController proxy-service-xtqv8 in namespace proxy-5131, will wait for the garbage collector to delete the pods +Aug 3 07:16:55.419: INFO: Deleting ReplicationController proxy-service-xtqv8 took: 19.957312ms +Aug 3 07:16:55.519: INFO: Terminating ReplicationController proxy-service-xtqv8 pods took: 100.71303ms +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:17:00.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-5131" for this suite. + +• [SLOW TEST:10.807 seconds] +[sig-network] Proxy +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":346,"completed":158,"skipped":2812,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:17:00.471: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a secret +STEP: listing secrets in all namespaces to ensure that there are more than zero +STEP: patching the secret +STEP: deleting the secret using a LabelSelector +STEP: listing secrets in all namespaces, searching for label name and value in patch +[AfterEach] [sig-node] Secrets + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:17:00.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-437" for this suite. +•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":346,"completed":159,"skipped":2825,"failed":0} + +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:17:00.607: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-6855 +[It] should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-6855 +Aug 3 07:17:00.704: INFO: Found 0 stateful pods, waiting for 1 +Aug 3 07:17:10.725: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +STEP: Patch a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 3 07:17:10.795: INFO: Deleting all statefulset in ns statefulset-6855 +Aug 3 07:17:10.801: INFO: Scaling statefulset ss to 0 +Aug 3 07:17:20.887: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 3 07:17:20.892: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:17:20.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-6855" for this suite. 
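+
+For reference, the scale reads and writes exercised above go through the StatefulSet's scale subresource rather than the main object. The same update can be made by hand with kubectl, which also talks to the /scale endpoint (namespace and name taken from this run):
+  kubectl -n statefulset-6855 scale statefulset ss --replicas=2
+  kubectl -n statefulset-6855 get statefulset ss -o jsonpath='{.spec.replicas}'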
+ +• [SLOW TEST:20.338 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":346,"completed":160,"skipped":2825,"failed":0} +S +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:17:20.946: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:17:21.008: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename disruption-2 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: listing a collection of PDBs across all namespaces +STEP: listing a collection of PDBs in namespace disruption-3258 +STEP: deleting a collection of PDBs +STEP: Waiting for the PDB collection to be deleted +[AfterEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:17:27.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2-7195" for this suite. +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:17:27.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-3258" for this suite. 
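+
+A PodDisruptionBudget of the kind listed and deleted above is only a few lines of YAML; a minimal sketch (name, selector and minAvailable are illustrative):
+  apiVersion: policy/v1
+  kind: PodDisruptionBudget
+  metadata:
+    name: example-pdb
+  spec:
+    minAvailable: 1
+    selector:
+      matchLabels:
+        app: example
+Listing across all namespaces and deleting as a collection, as the test does through the API, corresponds roughly to:
+  kubectl get pdb --all-namespaces
+  kubectl -n disruption-3258 delete pdb --all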
+ +• [SLOW TEST:6.324 seconds] +[sig-apps] DisruptionController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 + should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":161,"skipped":2826,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:17:27.270: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating cluster-info +Aug 3 07:17:27.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-8639 cluster-info' +Aug 3 07:17:27.446: INFO: stderr: "" +Aug 3 07:17:27.446: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.31.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:17:27.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8639" for this suite. 
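+
+The escape sequences in the captured stdout above are kubectl's green/yellow TTY highlighting of the control plane endpoint. The command being validated is plain kubectl; a fuller diagnostic dump is also available:
+  kubectl cluster-info
+  kubectl cluster-info dump --output-directory=/tmp/cluster-dump
+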
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":346,"completed":162,"skipped":2829,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:17:27.473: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:17:28.011: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 07:17:30.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 17, 27, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 17, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 17, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 17, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:17:32.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 17, 27, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 17, 27, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 17, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 17, 27, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 07:17:35.066: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:17:35.080: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6000-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource while v1 is storage version +STEP: Patching Custom Resource Definition to set v2 as storage +STEP: Patching the custom resource while v2 is storage version +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:17:38.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8607" for this suite. +STEP: Destroying namespace "webhook-8607-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:11.067 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":346,"completed":163,"skipped":2878,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:17:38.541: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Aug 3 07:17:38.713: INFO: Waiting up to 5m0s for pod "pod-45afe2bc-af6d-466a-b691-f8fbe0445f3b" in namespace "emptydir-3923" to be "Succeeded or Failed" +Aug 3 07:17:38.752: INFO: Pod "pod-45afe2bc-af6d-466a-b691-f8fbe0445f3b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.284495ms +Aug 3 07:17:40.773: INFO: Pod "pod-45afe2bc-af6d-466a-b691-f8fbe0445f3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059553879s +Aug 3 07:17:42.794: INFO: Pod "pod-45afe2bc-af6d-466a-b691-f8fbe0445f3b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.080775106s +STEP: Saw pod success +Aug 3 07:17:42.794: INFO: Pod "pod-45afe2bc-af6d-466a-b691-f8fbe0445f3b" satisfied condition "Succeeded or Failed" +Aug 3 07:17:42.801: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-45afe2bc-af6d-466a-b691-f8fbe0445f3b container test-container: +STEP: delete the pod +Aug 3 07:17:42.835: INFO: Waiting for pod pod-45afe2bc-af6d-466a-b691-f8fbe0445f3b to disappear +Aug 3 07:17:42.843: INFO: Pod pod-45afe2bc-af6d-466a-b691-f8fbe0445f3b no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:17:42.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3923" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":164,"skipped":2884,"failed":0} +SS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:17:42.864: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:17:42.984: INFO: Create a RollingUpdate DaemonSet +Aug 3 07:17:42.995: INFO: Check that daemon pods launch on every node of the cluster +Aug 3 07:17:43.009: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:43.010: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:43.010: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:43.025: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 07:17:43.025: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 07:17:44.049: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:44.049: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:44.049: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:44.066: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 
3 07:17:44.066: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 07:17:45.040: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:45.040: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:45.040: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:45.056: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 07:17:45.056: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 07:17:46.039: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:46.039: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:46.039: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:46.045: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 07:17:46.045: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 07:17:47.037: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:47.038: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:47.038: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:47.044: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 07:17:47.044: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 07:17:48.040: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:48.040: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:48.040: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:48.046: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 3 07:17:48.046: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +Aug 3 07:17:48.046: INFO: Update the DaemonSet to trigger a rollout +Aug 3 07:17:48.065: INFO: Updating DaemonSet daemon-set +Aug 3 07:17:53.101: INFO: Roll back the DaemonSet before rollout is complete +Aug 3 07:17:53.119: INFO: Updating DaemonSet daemon-set +Aug 3 07:17:53.120: INFO: Make sure DaemonSet rollback is complete +Aug 3 07:17:53.128: INFO: Wrong image for pod: daemon-set-gxkg4. 
Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2, got: foo:non-existent. +Aug 3 07:17:53.128: INFO: Pod daemon-set-gxkg4 is not available +Aug 3 07:17:53.142: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:53.142: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:53.142: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:54.159: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:54.160: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:54.160: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:55.168: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:55.168: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:55.168: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:56.164: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:56.164: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:56.164: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:57.167: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:57.167: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:57.167: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:58.163: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:58.163: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:58.163: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node +Aug 3 07:17:59.158: INFO: Pod daemon-set-d6czp is not available +Aug 3 07:17:59.171: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:59.171: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 07:17:59.172: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5350, will wait for the garbage collector to delete the pods +Aug 3 07:17:59.266: INFO: Deleting DaemonSet.extensions daemon-set took: 17.706692ms +Aug 3 07:17:59.368: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.268855ms +Aug 3 07:18:05.677: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 07:18:05.677: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 3 07:18:05.686: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"622005"},"items":null} + +Aug 3 07:18:05.693: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"622005"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:18:05.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-5350" for this suite. 
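+
+The sequence above (update the DaemonSet to an unpullable image, roll back before the rollout completes, then check that already-healthy pods were not restarted) maps onto the standard rollout commands. A hand-run equivalent using the names from this run (the container name is illustrative):
+  kubectl -n daemonsets-5350 set image daemonset/daemon-set app=foo:non-existent
+  kubectl -n daemonsets-5350 rollout undo daemonset/daemon-set
+  kubectl -n daemonsets-5350 rollout status daemonset/daemon-set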
+ +• [SLOW TEST:22.869 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":165,"skipped":2886,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:18:05.735: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: Orphaning one of the Job's Pods +Aug 3 07:18:12.348: INFO: Successfully updated pod "adopt-release-nf552" +STEP: Checking that the Job readopts the Pod +Aug 3 07:18:12.348: INFO: Waiting up to 15m0s for pod "adopt-release-nf552" in namespace "job-3494" to be "adopted" +Aug 3 07:18:12.355: INFO: Pod "adopt-release-nf552": Phase="Running", Reason="", readiness=true. Elapsed: 6.689739ms +Aug 3 07:18:14.366: INFO: Pod "adopt-release-nf552": Phase="Running", Reason="", readiness=true. Elapsed: 2.017845814s +Aug 3 07:18:14.366: INFO: Pod "adopt-release-nf552" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod +Aug 3 07:18:14.892: INFO: Successfully updated pod "adopt-release-nf552" +STEP: Checking that the Job releases the Pod +Aug 3 07:18:14.892: INFO: Waiting up to 15m0s for pod "adopt-release-nf552" in namespace "job-3494" to be "released" +Aug 3 07:18:14.900: INFO: Pod "adopt-release-nf552": Phase="Running", Reason="", readiness=true. Elapsed: 7.477922ms +Aug 3 07:18:16.913: INFO: Pod "adopt-release-nf552": Phase="Running", Reason="", readiness=true. Elapsed: 2.020907864s +Aug 3 07:18:16.913: INFO: Pod "adopt-release-nf552" satisfied condition "released" +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:18:16.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-3494" for this suite. 
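+
+Adoption and release above are driven by labels and ownerReferences: the Job controller re-adopts an orphaned pod whose labels still match its selector, and releases a pod once those labels are removed. By default a Job's selector matches a generated controller-uid label (with a job-name label alongside it), so the release step can be reproduced by dropping both (pod name taken from this run):
+  kubectl -n job-3494 get pod adopt-release-nf552 -o jsonpath='{.metadata.ownerReferences[0].name}'
+  kubectl -n job-3494 label pod adopt-release-nf552 controller-uid- job-name-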
+ +• [SLOW TEST:11.209 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":166,"skipped":2940,"failed":0} +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:18:16.944: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:18:17.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-1346 version' +Aug 3 07:18:17.118: INFO: stderr: "" +Aug 3 07:18:17.118: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.3\", GitCommit:\"816c97ab8cff8a1c72eccca1026f7820e93e0d25\", GitTreeState:\"clean\", BuildDate:\"2022-01-25T21:25:17Z\", GoVersion:\"go1.17.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.3\", GitCommit:\"816c97ab8cff8a1c72eccca1026f7820e93e0d25\", GitTreeState:\"clean\", BuildDate:\"2022-01-25T21:19:12Z\", GoVersion:\"go1.17.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:18:17.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1346" for this suite. 
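+
+The version check above parses the human-readable output; the same client and server version.Info is available in machine-readable form, which is easier to assert on:
+  kubectl version -o json
+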
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":346,"completed":167,"skipped":2940,"failed":0} + +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:18:17.161: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 07:18:17.282: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0adcbc1-b583-45e0-a841-d27380341262" in namespace "projected-1101" to be "Succeeded or Failed" +Aug 3 07:18:17.299: INFO: Pod "downwardapi-volume-f0adcbc1-b583-45e0-a841-d27380341262": Phase="Pending", Reason="", readiness=false. Elapsed: 16.278006ms +Aug 3 07:18:19.316: INFO: Pod "downwardapi-volume-f0adcbc1-b583-45e0-a841-d27380341262": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03345045s +Aug 3 07:18:21.334: INFO: Pod "downwardapi-volume-f0adcbc1-b583-45e0-a841-d27380341262": Phase="Running", Reason="", readiness=true. Elapsed: 4.051465439s +Aug 3 07:18:23.350: INFO: Pod "downwardapi-volume-f0adcbc1-b583-45e0-a841-d27380341262": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06733734s +STEP: Saw pod success +Aug 3 07:18:23.350: INFO: Pod "downwardapi-volume-f0adcbc1-b583-45e0-a841-d27380341262" satisfied condition "Succeeded or Failed" +Aug 3 07:18:23.355: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-f0adcbc1-b583-45e0-a841-d27380341262 container client-container: +STEP: delete the pod +Aug 3 07:18:23.390: INFO: Waiting for pod downwardapi-volume-f0adcbc1-b583-45e0-a841-d27380341262 to disappear +Aug 3 07:18:23.398: INFO: Pod downwardapi-volume-f0adcbc1-b583-45e0-a841-d27380341262 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:18:23.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1101" for this suite. 
+ +• [SLOW TEST:6.257 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":168,"skipped":2940,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:18:23.420: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:18:23.523: INFO: The status of Pod busybox-readonly-fsf747241b-1ce4-4800-9d05-1514655b3435 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:18:25.538: INFO: The status of Pod busybox-readonly-fsf747241b-1ce4-4800-9d05-1514655b3435 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:18:27.544: INFO: The status of Pod busybox-readonly-fsf747241b-1ce4-4800-9d05-1514655b3435 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:18:27.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-9462" for this suite. 
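+
+The read-only behaviour verified above comes from a single securityContext field. A minimal sketch of a pod that demonstrates it (image and command are illustrative; the write is expected to fail):
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: busybox-readonly-true
+  spec:
+    restartPolicy: Never
+    containers:
+    - name: busybox
+      image: busybox:1.29
+      command: ["sh", "-c", "touch /attempt && echo writable || echo read-only"]
+      securityContext:
+        readOnlyRootFilesystem: true
+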
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":169,"skipped":2995,"failed":0} +SSSSSS +------------------------------ +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:18:27.608: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-f1c9322c-2201-4aee-bd0e-b56fa689badf in namespace container-probe-9116 +Aug 3 07:18:33.730: INFO: Started pod liveness-f1c9322c-2201-4aee-bd0e-b56fa689badf in namespace container-probe-9116 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 3 07:18:33.737: INFO: Initial restart count of pod liveness-f1c9322c-2201-4aee-bd0e-b56fa689badf is 0 +Aug 3 07:18:49.858: INFO: Restart count of pod container-probe-9116/liveness-f1c9322c-2201-4aee-bd0e-b56fa689badf is now 1 (16.121391589s elapsed) +Aug 3 07:19:10.008: INFO: Restart count of pod container-probe-9116/liveness-f1c9322c-2201-4aee-bd0e-b56fa689badf is now 2 (36.271052703s elapsed) +Aug 3 07:19:30.161: INFO: Restart count of pod container-probe-9116/liveness-f1c9322c-2201-4aee-bd0e-b56fa689badf is now 3 (56.424157961s elapsed) +Aug 3 07:19:50.305: INFO: Restart count of pod container-probe-9116/liveness-f1c9322c-2201-4aee-bd0e-b56fa689badf is now 4 (1m16.568115798s elapsed) +Aug 3 07:20:50.745: INFO: Restart count of pod container-probe-9116/liveness-f1c9322c-2201-4aee-bd0e-b56fa689badf is now 5 (2m17.008402616s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:20:50.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9116" for this suite. 
+ +• [SLOW TEST:143.187 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":170,"skipped":3001,"failed":0} +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:20:50.796: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-16248f0b-b637-4405-92d2-8eeb7c4ba801 +STEP: Creating a pod to test consume configMaps +Aug 3 07:20:50.870: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-727a2bbc-64d8-4613-9b01-451794c1df55" in namespace "projected-1065" to be "Succeeded or Failed" +Aug 3 07:20:50.882: INFO: Pod "pod-projected-configmaps-727a2bbc-64d8-4613-9b01-451794c1df55": Phase="Pending", Reason="", readiness=false. Elapsed: 12.359069ms +Aug 3 07:20:52.902: INFO: Pod "pod-projected-configmaps-727a2bbc-64d8-4613-9b01-451794c1df55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032330634s +Aug 3 07:20:54.919: INFO: Pod "pod-projected-configmaps-727a2bbc-64d8-4613-9b01-451794c1df55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049773412s +STEP: Saw pod success +Aug 3 07:20:54.919: INFO: Pod "pod-projected-configmaps-727a2bbc-64d8-4613-9b01-451794c1df55" satisfied condition "Succeeded or Failed" +Aug 3 07:20:54.930: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-configmaps-727a2bbc-64d8-4613-9b01-451794c1df55 container projected-configmap-volume-test: +STEP: delete the pod +Aug 3 07:20:55.048: INFO: Waiting for pod pod-projected-configmaps-727a2bbc-64d8-4613-9b01-451794c1df55 to disappear +Aug 3 07:20:55.056: INFO: Pod pod-projected-configmaps-727a2bbc-64d8-4613-9b01-451794c1df55 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:20:55.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1065" for this suite. 
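+
+Consuming one ConfigMap through two volumes in the same pod, as above, is just two volume entries referencing the same name. A trimmed sketch (assumes a ConfigMap named shared-configmap with a data-1 key already exists in the namespace):
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: configmap-two-volumes
+  spec:
+    restartPolicy: Never
+    containers:
+    - name: test
+      image: busybox:1.29
+      command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
+      volumeMounts:
+      - name: cm-one
+        mountPath: /etc/cm-one
+      - name: cm-two
+        mountPath: /etc/cm-two
+    volumes:
+    - name: cm-one
+      projected:
+        sources:
+        - configMap:
+            name: shared-configmap
+    - name: cm-two
+      projected:
+        sources:
+        - configMap:
+            name: shared-configmap
+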
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":171,"skipped":3001,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:20:55.078: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating all guestbook components +Aug 3 07:20:55.142: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Aug 3 07:20:55.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 create -f -' +Aug 3 07:20:56.510: INFO: stderr: "" +Aug 3 07:20:56.510: INFO: stdout: "service/agnhost-replica created\n" +Aug 3 07:20:56.511: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Aug 3 07:20:56.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 create -f -' +Aug 3 07:20:57.955: INFO: stderr: "" +Aug 3 07:20:57.955: INFO: stdout: "service/agnhost-primary created\n" +Aug 3 07:20:57.955: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Aug 3 07:20:57.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 create -f -' +Aug 3 07:20:58.298: INFO: stderr: "" +Aug 3 07:20:58.299: INFO: stdout: "service/frontend created\n" +Aug 3 07:20:58.299: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: k8s.gcr.io/e2e-test-images/agnhost:2.33 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Aug 3 07:20:58.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 create -f -' +Aug 3 07:20:59.594: INFO: stderr: "" +Aug 3 07:20:59.594: INFO: stdout: "deployment.apps/frontend created\n" +Aug 3 07:20:59.594: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: k8s.gcr.io/e2e-test-images/agnhost:2.33 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Aug 3 07:20:59.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 create -f -' +Aug 3 07:20:59.866: INFO: stderr: "" +Aug 3 07:20:59.866: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Aug 3 07:20:59.867: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: k8s.gcr.io/e2e-test-images/agnhost:2.33 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Aug 3 07:20:59.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 create -f -' +Aug 3 07:21:00.184: INFO: stderr: "" +Aug 3 07:21:00.184: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app +Aug 3 07:21:00.185: INFO: Waiting for all frontend pods to be Running. +Aug 3 07:21:05.236: INFO: Waiting for frontend to serve content. +Aug 3 07:21:06.258: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: +Aug 3 07:21:11.282: INFO: Trying to add a new entry to the guestbook. +Aug 3 07:21:11.359: INFO: Verifying that added entry can be retrieved. +STEP: using delete to clean up resources +Aug 3 07:21:11.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 delete --grace-period=0 --force -f -' +Aug 3 07:21:11.539: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Aug 3 07:21:11.539: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources +Aug 3 07:21:11.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 delete --grace-period=0 --force -f -' +Aug 3 07:21:11.824: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 3 07:21:11.824: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Aug 3 07:21:11.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 delete --grace-period=0 --force -f -' +Aug 3 07:21:11.950: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 3 07:21:11.950: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Aug 3 07:21:11.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 delete --grace-period=0 --force -f -' +Aug 3 07:21:12.089: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 3 07:21:12.089: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Aug 3 07:21:12.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 delete --grace-period=0 --force -f -' +Aug 3 07:21:12.236: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 3 07:21:12.236: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Aug 3 07:21:12.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9896 delete --grace-period=0 --force -f -' +Aug 3 07:21:12.384: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 3 07:21:12.384: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:21:12.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9896" for this suite. 
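+
+All of the cleanup above goes through kubectl's force-delete path, which is what emits the repeated "Immediate deletion does not wait ..." warning: --grace-period=0 --force removes the API object immediately, without waiting for graceful container termination. The pattern, applied to any of the manifests created earlier in the test (manifest.yaml stands in for the YAML the test pipes over stdin):
+  kubectl -n kubectl-9896 delete --grace-period=0 --force -f manifest.yaml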
+ +• [SLOW TEST:17.334 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Guestbook application + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339 + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":346,"completed":172,"skipped":3012,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:21:12.413: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap that has name configmap-test-emptyKey-b51617da-b042-408d-b71e-e9323e831717 +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:21:12.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-318" for this suite. 
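+
+The rejection above happens in apiserver validation: ConfigMap keys must be non-empty, valid key names, so nothing is ever stored. A sketch of a manifest that reproduces the failure (applying it with kubectl returns a validation error and creates no object):
+  apiVersion: v1
+  kind: ConfigMap
+  metadata:
+    name: configmap-empty-key
+  data:
+    "": "value"
+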
+•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":346,"completed":173,"skipped":3053,"failed":0} +SSSSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:21:12.524: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:21:12.621: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-9af70360-1fa2-464f-88f3-86fd7ce0e7a4" in namespace "security-context-test-3566" to be "Succeeded or Failed" +Aug 3 07:21:12.629: INFO: Pod "busybox-readonly-false-9af70360-1fa2-464f-88f3-86fd7ce0e7a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186197ms +Aug 3 07:21:14.651: INFO: Pod "busybox-readonly-false-9af70360-1fa2-464f-88f3-86fd7ce0e7a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02981514s +Aug 3 07:21:16.666: INFO: Pod "busybox-readonly-false-9af70360-1fa2-464f-88f3-86fd7ce0e7a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045075432s +Aug 3 07:21:18.686: INFO: Pod "busybox-readonly-false-9af70360-1fa2-464f-88f3-86fd7ce0e7a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06547804s +Aug 3 07:21:18.687: INFO: Pod "busybox-readonly-false-9af70360-1fa2-464f-88f3-86fd7ce0e7a4" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:21:18.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-3566" for this suite. 
+
+• [SLOW TEST:6.199 seconds]
+[sig-node] Security Context
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
+ When creating a pod with readOnlyRootFilesystem
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
+ should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+------------------------------
+{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":346,"completed":174,"skipped":3058,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Update Demo
+ should create and stop a replication controller [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-cli] Kubectl client
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 3 07:21:18.724: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
+[BeforeEach] Update Demo
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
+[It] should create and stop a replication controller [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: creating a replication controller
+Aug 3 07:21:18.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-4741 create -f -'
+Aug 3 07:21:19.081: INFO: stderr: ""
+Aug 3 07:21:19.081: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Aug 3 07:21:19.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-4741 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
+Aug 3 07:21:19.203: INFO: stderr: ""
+Aug 3 07:21:19.203: INFO: stdout: "update-demo-nautilus-flgpt update-demo-nautilus-xw5jr "
+Aug 3 07:21:19.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-4741 get pods update-demo-nautilus-flgpt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
+Aug 3 07:21:19.320: INFO: stderr: ""
+Aug 3 07:21:19.320: INFO: stdout: ""
+Aug 3 07:21:19.320: INFO: update-demo-nautilus-flgpt is created but not running
+Aug 3 07:21:24.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-4741 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
+Aug 3 07:21:24.435: INFO: stderr: ""
+Aug 3 07:21:24.435: INFO: stdout: "update-demo-nautilus-flgpt update-demo-nautilus-xw5jr "
+Aug 3 07:21:24.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-4741 get pods update-demo-nautilus-flgpt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
+Aug 3 07:21:24.540: INFO: stderr: ""
+Aug 3 07:21:24.540: INFO: stdout: "true"
+Aug 3 07:21:24.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-4741 get pods update-demo-nautilus-flgpt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
+Aug 3 07:21:24.644: INFO: stderr: ""
+Aug 3 07:21:24.644: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
+Aug 3 07:21:24.644: INFO: validating pod update-demo-nautilus-flgpt
+Aug 3 07:21:24.654: INFO: got data: {
+ "image": "nautilus.jpg"
+}
+
+Aug 3 07:21:24.654: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Aug 3 07:21:24.654: INFO: update-demo-nautilus-flgpt is verified up and running
+Aug 3 07:21:24.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-4741 get pods update-demo-nautilus-xw5jr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
+Aug 3 07:21:24.759: INFO: stderr: ""
+Aug 3 07:21:24.759: INFO: stdout: "true"
+Aug 3 07:21:24.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-4741 get pods update-demo-nautilus-xw5jr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
+Aug 3 07:21:24.883: INFO: stderr: ""
+Aug 3 07:21:24.883: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
+Aug 3 07:21:24.883: INFO: validating pod update-demo-nautilus-xw5jr
+Aug 3 07:21:24.898: INFO: got data: {
+ "image": "nautilus.jpg"
+}
+
+Aug 3 07:21:24.898: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Aug 3 07:21:24.898: INFO: update-demo-nautilus-xw5jr is verified up and running
+STEP: using delete to clean up resources
+Aug 3 07:21:24.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-4741 delete --grace-period=0 --force -f -'
+Aug 3 07:21:25.027: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Aug 3 07:21:25.027: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
+Aug 3 07:21:25.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-4741 get rc,svc -l name=update-demo --no-headers'
+Aug 3 07:21:25.159: INFO: stderr: "No resources found in kubectl-4741 namespace.\n"
+Aug 3 07:21:25.159: INFO: stdout: ""
+Aug 3 07:21:25.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-4741 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Aug 3 07:21:25.301: INFO: stderr: ""
+Aug 3 07:21:25.301: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 3 07:21:25.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-4741" for this suite.
+
+• [SLOW TEST:6.613 seconds]
+[sig-cli] Kubectl client
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+ Update Demo
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
+ should create and stop a replication controller [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+------------------------------
+{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":346,"completed":175,"skipped":3120,"failed":0}
+S
+------------------------------
+[sig-storage] Secrets
+ should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-storage] Secrets
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 3 07:21:25.338: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: Creating secret with name secret-test-map-54814eb0-044f-4d77-a93c-8c640a388426
+STEP: Creating a pod to test consume secrets
+Aug 3 07:21:25.426: INFO: Waiting up to 5m0s for pod "pod-secrets-75086c58-db46-4fcb-aa9b-4089a11db96d" in namespace "secrets-5934" to be "Succeeded or Failed"
+Aug 3 07:21:25.433: INFO: Pod "pod-secrets-75086c58-db46-4fcb-aa9b-4089a11db96d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.85663ms
+Aug 3 07:21:27.443: INFO: Pod "pod-secrets-75086c58-db46-4fcb-aa9b-4089a11db96d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016679888s
+Aug 3 07:21:29.459: INFO: Pod "pod-secrets-75086c58-db46-4fcb-aa9b-4089a11db96d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03297s
+STEP: Saw pod success
+Aug 3 07:21:29.459: INFO: Pod "pod-secrets-75086c58-db46-4fcb-aa9b-4089a11db96d" satisfied condition "Succeeded or Failed"
+Aug 3 07:21:29.469: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-secrets-75086c58-db46-4fcb-aa9b-4089a11db96d container secret-volume-test:
+STEP: delete the pod
+Aug 3 07:21:29.520: INFO: Waiting for pod pod-secrets-75086c58-db46-4fcb-aa9b-4089a11db96d to disappear
+Aug 3 07:21:29.530: INFO: Pod pod-secrets-75086c58-db46-4fcb-aa9b-4089a11db96d no longer exists
+[AfterEach] [sig-storage] Secrets
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 3 07:21:29.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-5934" for this suite.
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":176,"skipped":3121,"failed":0}
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial]
+ validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 3 07:21:29.555: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
+Aug 3 07:21:29.630: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Aug 3 07:21:29.661: INFO: Waiting for terminating namespaces to be deleted...
+Aug 3 07:21:29.669: INFO:
+Logging pods the apiserver thinks is on node dce-10-6-213-40 before test
+Aug 3 07:21:29.690: INFO: dce-system-dnsservice-5fd54fd444-4b57d from dce-system started at 2022-08-03 03:54:34 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container dce-system-dnsservice ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: calico-node-ftbqq from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container calico-node ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: coredns-coredns-6b6c46d8b7-5dgzm from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container coredns ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: coredns-coredns-6b6c46d8b7-tb89f from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container coredns ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: dce-engine-htt6p from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container dce-engine ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: dce-kube-apiserver-proxy-dce-10-6-213-40 from kube-system started at 2022-08-01 07:26:27 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container dce-kube-apiserver-proxy ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: dce-parcel-agent-5xx9x from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container dce-parcel-agent ready: true, restart count 1
+Aug 3 07:21:29.691: INFO: dce-uds-host-driver-2w76c from kube-system started at 2022-08-02 09:36:09 +0000 UTC (2 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container dce-uds-csi-driver-prober ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: Container metrics-collector ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: dce-uds-policy-controller-6f4848f45d-8jhgc from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container dce-uds-policy-controller ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: dce-uds-snapshot-controller-7b76dc77c9-5tkg8 from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container snapshotter ready: true, restart count 2
+Aug 3 07:21:29.691: INFO: kube-proxy-fpf4g from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container kube-proxy ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: metrics-server-55db7974f8-2jq52 from kube-system started at 2022-08-02 09:40:49 +0000 UTC (2 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container metrics-server ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: Container metrics-server-nanny ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: node-local-dns-c7shk from kube-system started at 2022-08-02 07:46:48 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container node-cache ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: sonobuoy-systemd-logs-daemon-set-10147ad5bf5a4ba1-xplgl from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded)
+Aug 3 07:21:29.691: INFO: Container sonobuoy-worker ready: true, restart count 0
+Aug 3 07:21:29.691: INFO: Container systemd-logs ready: true, restart count 0
+Aug 3 07:21:29.691: INFO:
+Logging pods the apiserver thinks is on node dce-10-6-213-50 before test
+Aug 3 07:21:29.716: INFO: calico-node-s6xjf from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.716: INFO: Container calico-node ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: dce-engine-6d4wp from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.716: INFO: Container dce-engine ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: dce-kube-apiserver-proxy-dce-10-6-213-50 from kube-system started at 2022-08-01 07:26:33 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.716: INFO: Container dce-kube-apiserver-proxy ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: dce-parcel-agent-t4d24 from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.716: INFO: Container dce-parcel-agent ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: dce-uds-host-driver-nqcxc from kube-system started at 2022-08-02 09:40:52 +0000 UTC (2 container statuses recorded)
+Aug 3 07:21:29.716: INFO: Container dce-uds-csi-driver-prober ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: Container metrics-collector ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: kube-proxy-j6g24 from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.716: INFO: Container kube-proxy ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: node-local-dns-st2fz from kube-system started at 2022-08-03 04:45:49 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.716: INFO: Container node-cache ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: sonobuoy from sonobuoy started at 2022-08-03 06:16:12 +0000 UTC (1 container statuses recorded)
+Aug 3 07:21:29.716: INFO: Container kube-sonobuoy ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: sonobuoy-e2e-job-eb6a0f3fa9794033 from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded)
+Aug 3 07:21:29.716: INFO: Container e2e ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: Container sonobuoy-worker ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: sonobuoy-systemd-logs-daemon-set-10147ad5bf5a4ba1-gxfgs from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded)
+Aug 3 07:21:29.716: INFO: Container sonobuoy-worker ready: true, restart count 0
+Aug 3 07:21:29.716: INFO: Container systemd-logs ready: true, restart count 0
+[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: Trying to launch a pod without a label to get a node which can launch it.
+STEP: Explicitly delete pod here to free the resource it takes.
+STEP: Trying to apply a random label on the found node.
+STEP: verifying the node has the label kubernetes.io/e2e-227f6492-dbbb-434e-acf5-da0d89a9c365 95
+STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
+STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.6.213.50 on the node which pod4 resides and expect not scheduled
+STEP: removing the label kubernetes.io/e2e-227f6492-dbbb-434e-acf5-da0d89a9c365 off the node dce-10-6-213-50
+STEP: verifying the node doesn't have the label kubernetes.io/e2e-227f6492-dbbb-434e-acf5-da0d89a9c365
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 3 07:26:37.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-5849" for this suite.
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
+
+• [SLOW TEST:308.419 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
+ validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+------------------------------
+{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":177,"skipped":3136,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap
+ should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-storage] Projected configMap
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 3 07:26:37.974: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: Creating configMap with name projected-configmap-test-volume-d3ec29b3-fe44-4a9e-808e-a7b4f32069f2
+STEP: Creating a pod to test consume configMaps
+Aug 3 07:26:38.087: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ed55dea5-0863-4e30-9ce9-d1275983b45a" in namespace "projected-8179" to be "Succeeded or Failed"
+Aug 3 07:26:38.098: INFO: Pod "pod-projected-configmaps-ed55dea5-0863-4e30-9ce9-d1275983b45a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.549488ms
+Aug 3 07:26:40.123: INFO: Pod "pod-projected-configmaps-ed55dea5-0863-4e30-9ce9-d1275983b45a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035704189s
+Aug 3 07:26:42.137: INFO: Pod "pod-projected-configmaps-ed55dea5-0863-4e30-9ce9-d1275983b45a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049521639s
+STEP: Saw pod success
+Aug 3 07:26:42.137: INFO: Pod "pod-projected-configmaps-ed55dea5-0863-4e30-9ce9-d1275983b45a" satisfied condition "Succeeded or Failed"
+Aug 3 07:26:42.144: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-configmaps-ed55dea5-0863-4e30-9ce9-d1275983b45a container agnhost-container:
+STEP: delete the pod
+Aug 3 07:26:42.253: INFO: Waiting for pod pod-projected-configmaps-ed55dea5-0863-4e30-9ce9-d1275983b45a to disappear
+Aug 3 07:26:42.258: INFO: Pod pod-projected-configmaps-ed55dea5-0863-4e30-9ce9-d1275983b45a no longer exists
+[AfterEach] [sig-storage] Projected configMap
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 3 07:26:42.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-8179" for this suite.
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":178,"skipped":3175,"failed":0}
+SSS
+------------------------------
+[sig-apps] ReplicaSet
+ Replicaset should have a working scale subresource [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-apps] ReplicaSet
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 3 07:26:42.278: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993
+STEP: Building a namespace api object, basename replicaset
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] Replicaset should have a working scale subresource [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota
+Aug 3 07:26:42.351: INFO: Pod name sample-pod: Found 0 pods out of 1
+Aug 3 07:26:47.380: INFO: Pod name sample-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+STEP: getting scale subresource
+STEP: updating a scale subresource
+STEP: verifying the replicaset Spec.Replicas was modified
+STEP: Patch a scale subresource
+[AfterEach] [sig-apps] ReplicaSet
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 3 07:26:47.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replicaset-3251" for this suite.
+ +• [SLOW TEST:5.211 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":179,"skipped":3178,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:26:47.490: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename security-context +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Aug 3 07:26:47.577: INFO: Waiting up to 5m0s for pod "security-context-424ce62a-e00e-4ddb-a2a5-0d88082dcd3f" in namespace "security-context-7461" to be "Succeeded or Failed" +Aug 3 07:26:47.588: INFO: Pod "security-context-424ce62a-e00e-4ddb-a2a5-0d88082dcd3f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655563ms +Aug 3 07:26:49.604: INFO: Pod "security-context-424ce62a-e00e-4ddb-a2a5-0d88082dcd3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027205922s +Aug 3 07:26:51.619: INFO: Pod "security-context-424ce62a-e00e-4ddb-a2a5-0d88082dcd3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041697409s +STEP: Saw pod success +Aug 3 07:26:51.619: INFO: Pod "security-context-424ce62a-e00e-4ddb-a2a5-0d88082dcd3f" satisfied condition "Succeeded or Failed" +Aug 3 07:26:51.624: INFO: Trying to get logs from node dce-10-6-213-50 pod security-context-424ce62a-e00e-4ddb-a2a5-0d88082dcd3f container test-container: +STEP: delete the pod +Aug 3 07:26:51.667: INFO: Waiting for pod security-context-424ce62a-e00e-4ddb-a2a5-0d88082dcd3f to disappear +Aug 3 07:26:51.672: INFO: Pod security-context-424ce62a-e00e-4ddb-a2a5-0d88082dcd3f no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:26:51.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-7461" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":180,"skipped":3226,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:26:51.706: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-421d0f9b-ac36-4ceb-8382-e7c87bac9abf +STEP: Creating a pod to test consume configMaps +Aug 3 07:26:51.806: INFO: Waiting up to 5m0s for pod "pod-configmaps-87d3aa3d-8cf3-4355-9dca-23bf1e0ae8a4" in namespace "configmap-284" to be "Succeeded or Failed" +Aug 3 07:26:51.814: INFO: Pod "pod-configmaps-87d3aa3d-8cf3-4355-9dca-23bf1e0ae8a4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.335647ms +Aug 3 07:26:53.828: INFO: Pod "pod-configmaps-87d3aa3d-8cf3-4355-9dca-23bf1e0ae8a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021591453s +Aug 3 07:26:55.845: INFO: Pod "pod-configmaps-87d3aa3d-8cf3-4355-9dca-23bf1e0ae8a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038467624s +STEP: Saw pod success +Aug 3 07:26:55.845: INFO: Pod "pod-configmaps-87d3aa3d-8cf3-4355-9dca-23bf1e0ae8a4" satisfied condition "Succeeded or Failed" +Aug 3 07:26:55.852: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-configmaps-87d3aa3d-8cf3-4355-9dca-23bf1e0ae8a4 container agnhost-container: +STEP: delete the pod +Aug 3 07:26:55.901: INFO: Waiting for pod pod-configmaps-87d3aa3d-8cf3-4355-9dca-23bf1e0ae8a4 to disappear +Aug 3 07:26:55.920: INFO: Pod pod-configmaps-87d3aa3d-8cf3-4355-9dca-23bf1e0ae8a4 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:26:55.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-284" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":181,"skipped":3255,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:26:55.948: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:26:56.060: INFO: Waiting up to 5m0s for pod "busybox-user-65534-c0daf1fd-dfc1-4790-8eca-a9ea9fed96d5" in namespace "security-context-test-9513" to be "Succeeded or Failed" +Aug 3 07:26:56.080: INFO: Pod "busybox-user-65534-c0daf1fd-dfc1-4790-8eca-a9ea9fed96d5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.490449ms +Aug 3 07:26:58.096: INFO: Pod "busybox-user-65534-c0daf1fd-dfc1-4790-8eca-a9ea9fed96d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036020515s +Aug 3 07:27:00.114: INFO: Pod "busybox-user-65534-c0daf1fd-dfc1-4790-8eca-a9ea9fed96d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054617846s +Aug 3 07:27:00.115: INFO: Pod "busybox-user-65534-c0daf1fd-dfc1-4790-8eca-a9ea9fed96d5" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:27:00.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-9513" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":182,"skipped":3302,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:27:00.140: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 07:27:00.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf373109-2b8e-4d18-a30e-07cbb0449db5" in namespace "downward-api-7671" to be "Succeeded or Failed" +Aug 3 07:27:00.279: INFO: Pod "downwardapi-volume-bf373109-2b8e-4d18-a30e-07cbb0449db5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.638969ms +Aug 3 07:27:02.302: INFO: Pod "downwardapi-volume-bf373109-2b8e-4d18-a30e-07cbb0449db5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03496851s +Aug 3 07:27:04.315: INFO: Pod "downwardapi-volume-bf373109-2b8e-4d18-a30e-07cbb0449db5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048539936s +Aug 3 07:27:06.334: INFO: Pod "downwardapi-volume-bf373109-2b8e-4d18-a30e-07cbb0449db5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066786584s +STEP: Saw pod success +Aug 3 07:27:06.334: INFO: Pod "downwardapi-volume-bf373109-2b8e-4d18-a30e-07cbb0449db5" satisfied condition "Succeeded or Failed" +Aug 3 07:27:06.340: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-bf373109-2b8e-4d18-a30e-07cbb0449db5 container client-container: +STEP: delete the pod +Aug 3 07:27:06.382: INFO: Waiting for pod downwardapi-volume-bf373109-2b8e-4d18-a30e-07cbb0449db5 to disappear +Aug 3 07:27:06.387: INFO: Pod downwardapi-volume-bf373109-2b8e-4d18-a30e-07cbb0449db5 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:27:06.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7671" for this suite. 
+ +• [SLOW TEST:6.267 seconds] +[sig-storage] Downward API volume +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":183,"skipped":3333,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:27:06.408: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +Aug 3 07:27:06.547: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:27:08.565: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:27:10.559: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:27:12.568: INFO: The status of Pod pod-adoption-release is Running (Ready = true) +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Aug 3 07:27:13.631: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:27:14.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3348" for this suite. 
+ +• [SLOW TEST:8.373 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":184,"skipped":3351,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:27:14.783: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ConfigMap +STEP: fetching the ConfigMap +STEP: patching the ConfigMap +STEP: listing all ConfigMaps in all namespaces with a label selector +STEP: deleting the ConfigMap by collection with a label selector +STEP: listing all ConfigMaps in test namespace +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:27:14.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4085" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":346,"completed":185,"skipped":3443,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:27:15.019: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 07:27:15.106: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4220da5-0dff-4c50-9bda-3934b4636a89" in namespace "downward-api-1558" to be "Succeeded or Failed" +Aug 3 07:27:15.113: INFO: Pod "downwardapi-volume-f4220da5-0dff-4c50-9bda-3934b4636a89": Phase="Pending", Reason="", readiness=false. Elapsed: 7.531857ms +Aug 3 07:27:17.128: INFO: Pod "downwardapi-volume-f4220da5-0dff-4c50-9bda-3934b4636a89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021903082s +Aug 3 07:27:19.142: INFO: Pod "downwardapi-volume-f4220da5-0dff-4c50-9bda-3934b4636a89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036076595s +Aug 3 07:27:21.157: INFO: Pod "downwardapi-volume-f4220da5-0dff-4c50-9bda-3934b4636a89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050732165s +STEP: Saw pod success +Aug 3 07:27:21.157: INFO: Pod "downwardapi-volume-f4220da5-0dff-4c50-9bda-3934b4636a89" satisfied condition "Succeeded or Failed" +Aug 3 07:27:21.163: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-f4220da5-0dff-4c50-9bda-3934b4636a89 container client-container: +STEP: delete the pod +Aug 3 07:27:21.213: INFO: Waiting for pod downwardapi-volume-f4220da5-0dff-4c50-9bda-3934b4636a89 to disappear +Aug 3 07:27:21.218: INFO: Pod downwardapi-volume-f4220da5-0dff-4c50-9bda-3934b4636a89 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:27:21.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1558" for this suite. 
+
+• [SLOW TEST:6.220 seconds]
+[sig-storage] Downward API volume
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
+ should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+------------------------------
+{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":186,"skipped":3470,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl replace
+ should update a single-container pod's image [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-cli] Kubectl client
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 3 07:27:21.240: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
+[BeforeEach] Kubectl replace
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1571
+[It] should update a single-container pod's image [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
+Aug 3 07:27:21.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7499 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod'
+Aug 3 07:27:21.530: INFO: stderr: ""
+Aug 3 07:27:21.530: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
+STEP: verifying the pod e2e-test-httpd-pod is running
+STEP: verifying the pod e2e-test-httpd-pod was created
+Aug 3 07:27:26.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7499 get pod e2e-test-httpd-pod -o json'
+Aug 3 07:27:26.709: INFO: stderr: ""
+Aug 3 07:27:26.709: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/ipv4pools\": \"[\\\"default-ipv4-ippool\\\"]\",\n \"dce.daocloud.io/parcel.egress.burst\": \"0\",\n \"dce.daocloud.io/parcel.egress.rate\": \"0\",\n \"dce.daocloud.io/parcel.ingress.burst\": \"0\",\n \"dce.daocloud.io/parcel.ingress.rate\": \"0\"\n },\n \"creationTimestamp\": \"2022-08-03T07:27:21Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7499\",\n \"resourceVersion\": \"625041\",\n \"uid\": \"71d36153-836e-4fc5-95d9-f04a55029746\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-rn26r\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"dce-10-6-213-50\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-rn26r\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-08-03T07:27:21Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-08-03T07:27:25Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-08-03T07:27:25Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-08-03T07:27:21Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://00ae1c0693bc2f7218ce7f64ad343201ab24aad3edd2fa3ebce40c009c81684b\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-08-03T07:27:24Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.6.213.50\",\n \"phase\": \"Running\",\n \"podIP\": \"172.29.175.8\",\n \"podIPs\": [\n {\n \"ip\": \"172.29.175.8\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-08-03T07:27:21Z\"\n }\n}\n"
+STEP: replace the image in the pod
+Aug 3 07:27:26.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7499 replace -f -'
+Aug 3 07:27:27.046: INFO: stderr: ""
+Aug 3 07:27:27.046: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
+STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-2
+[AfterEach] Kubectl replace
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1575
+Aug 3 07:27:27.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7499 delete pods e2e-test-httpd-pod'
+Aug 3 07:27:30.846: INFO: stderr: ""
+Aug 3 07:27:30.846: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 3 07:27:30.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-7499" for this suite.
+
+• [SLOW TEST:9.638 seconds]
+[sig-cli] Kubectl client
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+ Kubectl replace
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
+ should update a single-container pod's image [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+------------------------------
+{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":346,"completed":187,"skipped":3493,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector
+ should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-api-machinery] Garbage collector
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 3 07:27:30.880: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: create the rc1
+STEP: create the rc2
+STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
+STEP: delete the rc simpletest-rc-to-be-deleted
+STEP: wait for the rc to be deleted
+Aug 3 07:27:43.154: INFO: 72 pods remaining
+Aug 3 07:27:43.155: INFO: 72 pods has nil DeletionTimestamp
+Aug 3 07:27:43.155: INFO:
+STEP: Gathering metrics
+Aug 3 07:27:48.275: INFO: The status of Pod dce-kube-controller-manager-dce-10-6-213-30 is Running (Ready = true)
+Aug 3 07:28:48.676: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
+Aug 3 07:28:48.677: INFO: Deleting pod "simpletest-rc-to-be-deleted-2gdmh" in namespace "gc-8193"
+Aug 3 07:28:48.731: INFO: Deleting pod "simpletest-rc-to-be-deleted-2pm5j" in namespace "gc-8193"
+Aug 3 07:28:48.763: INFO: Deleting pod "simpletest-rc-to-be-deleted-2q4xj" in namespace "gc-8193"
+Aug 3 07:28:48.785: INFO: Deleting pod "simpletest-rc-to-be-deleted-2r7v6" in namespace "gc-8193"
+Aug 3 07:28:48.823: INFO: Deleting pod "simpletest-rc-to-be-deleted-2rfvq" in namespace "gc-8193"
+Aug 3 07:28:48.852: INFO: Deleting pod "simpletest-rc-to-be-deleted-2wj65" in namespace "gc-8193"
+Aug 3 07:28:48.873: INFO: Deleting pod "simpletest-rc-to-be-deleted-4ljcz" in namespace "gc-8193"
+Aug 3 07:28:48.897: INFO: Deleting pod "simpletest-rc-to-be-deleted-4vhvp" in namespace "gc-8193"
+Aug 3 07:28:48.920: INFO: Deleting pod "simpletest-rc-to-be-deleted-4zljv" in namespace "gc-8193"
+Aug 3 07:28:48.946: INFO: Deleting pod "simpletest-rc-to-be-deleted-557h8" in namespace "gc-8193"
+Aug 3 07:28:48.982: INFO: Deleting pod "simpletest-rc-to-be-deleted-59sws" in namespace "gc-8193"
+Aug 3 07:28:49.021: INFO: Deleting pod "simpletest-rc-to-be-deleted-5jg7d" in namespace "gc-8193"
+Aug 3 07:28:49.052: INFO: Deleting pod "simpletest-rc-to-be-deleted-5wv5f" in namespace "gc-8193"
+Aug 3 07:28:49.104: INFO: Deleting pod "simpletest-rc-to-be-deleted-68nlg" in namespace "gc-8193"
+Aug 3 07:28:49.134: INFO: Deleting pod "simpletest-rc-to-be-deleted-6bfwd" in namespace "gc-8193"
+Aug 3 07:28:49.160: INFO: Deleting pod "simpletest-rc-to-be-deleted-6cvkg" in namespace "gc-8193"
+Aug 3 07:28:49.187: INFO: Deleting pod "simpletest-rc-to-be-deleted-6d6p8" in namespace "gc-8193"
+Aug 3 07:28:49.211: INFO: Deleting pod "simpletest-rc-to-be-deleted-6v5px" in namespace "gc-8193"
+Aug 3 07:28:49.246: INFO: Deleting pod "simpletest-rc-to-be-deleted-6vq8f" in namespace "gc-8193"
+Aug 3 07:28:49.278: INFO: Deleting pod "simpletest-rc-to-be-deleted-6whg4" in namespace "gc-8193"
+Aug 3 07:28:49.299: INFO: Deleting pod "simpletest-rc-to-be-deleted-7gd4m" in namespace "gc-8193"
+Aug 3 07:28:49.330: INFO: Deleting pod "simpletest-rc-to-be-deleted-7lslm" in namespace "gc-8193"
+Aug 3 07:28:49.363: INFO: Deleting pod "simpletest-rc-to-be-deleted-7sjjr" in namespace "gc-8193"
+Aug 3 07:28:49.388: INFO: Deleting pod "simpletest-rc-to-be-deleted-8679g" in namespace "gc-8193"
+Aug 3 07:28:49.417: INFO: Deleting pod "simpletest-rc-to-be-deleted-8b4fl" in namespace "gc-8193"
+Aug 3 07:28:49.471: INFO: Deleting pod "simpletest-rc-to-be-deleted-8f6lk" in namespace "gc-8193"
+Aug 3 07:28:49.516: INFO: Deleting pod "simpletest-rc-to-be-deleted-8j4vw" in namespace "gc-8193"
+Aug 3 07:28:49.575: INFO: Deleting pod "simpletest-rc-to-be-deleted-8st4d" in namespace "gc-8193"
+Aug 3 07:28:49.628: INFO: Deleting pod "simpletest-rc-to-be-deleted-8t2xz" in namespace "gc-8193"
+Aug 3 07:28:49.719: INFO: Deleting pod "simpletest-rc-to-be-deleted-97jzd" in namespace "gc-8193"
+Aug 3 07:28:49.767: INFO: Deleting pod "simpletest-rc-to-be-deleted-9lxnx" in namespace "gc-8193"
+Aug 3 07:28:49.831: INFO: Deleting pod "simpletest-rc-to-be-deleted-b68zp" in namespace "gc-8193"
+Aug 3 07:28:49.857: INFO: Deleting pod "simpletest-rc-to-be-deleted-b9hrc" in namespace "gc-8193"
+Aug 3 07:28:49.906: INFO: Deleting pod "simpletest-rc-to-be-deleted-bk9h4" in namespace "gc-8193"
+Aug 3 07:28:49.934: INFO: Deleting pod "simpletest-rc-to-be-deleted-brmbd" in namespace "gc-8193"
+Aug 3 07:28:49.969: INFO: Deleting pod "simpletest-rc-to-be-deleted-bv5zz" in namespace "gc-8193"
+Aug 3 07:28:49.992: INFO: Deleting pod "simpletest-rc-to-be-deleted-bz97c" in namespace "gc-8193"
+Aug 3 07:28:50.022: INFO: Deleting pod "simpletest-rc-to-be-deleted-cbqhn" in namespace "gc-8193"
+Aug 3 07:28:50.074: INFO: Deleting pod "simpletest-rc-to-be-deleted-cxkd4" in namespace "gc-8193"
+Aug 3 07:28:50.102: INFO: Deleting pod "simpletest-rc-to-be-deleted-czf5z" in namespace "gc-8193"
+Aug 3 07:28:50.140: INFO: Deleting pod "simpletest-rc-to-be-deleted-dhgn7" in namespace "gc-8193"
+Aug 3 07:28:50.186: INFO: Deleting pod "simpletest-rc-to-be-deleted-dnhx6" in namespace "gc-8193"
+Aug 3 07:28:50.234: INFO: Deleting pod "simpletest-rc-to-be-deleted-dt6qz" in namespace "gc-8193"
+Aug 3 07:28:50.263: INFO: Deleting pod "simpletest-rc-to-be-deleted-f54dv" in namespace "gc-8193"
+Aug 3 07:28:50.297: INFO: Deleting pod "simpletest-rc-to-be-deleted-gc978" in namespace "gc-8193"
+Aug 3 07:28:50.337: INFO: Deleting pod "simpletest-rc-to-be-deleted-gdxxt" in namespace "gc-8193"
+Aug 3 07:28:50.386: INFO: Deleting pod "simpletest-rc-to-be-deleted-gfgpz" in namespace "gc-8193"
+Aug 3 07:28:50.443: INFO: Deleting pod "simpletest-rc-to-be-deleted-hwvcx" in namespace "gc-8193"
+Aug 3 07:28:50.517: INFO: Deleting pod "simpletest-rc-to-be-deleted-jdl8f" in namespace "gc-8193"
+Aug 3 07:28:50.551: INFO: Deleting pod "simpletest-rc-to-be-deleted-jptrg" in namespace "gc-8193"
+[AfterEach] [sig-api-machinery] Garbage collector
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 3 07:28:50.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-8193" for this suite.
+
+• [SLOW TEST:79.759 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+ should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+------------------------------
+{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":188,"skipped":3521,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] NoExecuteTaintManager Multiple Pods [Serial]
+ evicts pods with minTolerationSeconds [Disruptive] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 3 07:28:50.640: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993
+STEP: Building a namespace api object, basename taint-multiple-pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345
+Aug 3 07:28:50.761: INFO: Waiting up to 1m0s for all nodes to be ready
+Aug 3 07:29:50.886: INFO: Waiting for terminating namespaces to be deleted...
+[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:29:50.894: INFO: Starting informer... +STEP: Starting pods... +Aug 3 07:29:51.144: INFO: Pod1 is running on dce-10-6-213-50. Tainting Node +Aug 3 07:29:55.385: INFO: Pod2 is running on dce-10-6-213-50. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting for Pod1 and Pod2 to be deleted +Aug 3 07:30:03.438: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Aug 3 07:30:23.408: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:30:23.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-5599" for this suite. + +• [SLOW TEST:92.821 seconds] +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":189,"skipped":3561,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:30:23.461: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +[It] should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: Ensuring more than one job is running at a time +STEP: Ensuring at least two running jobs exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:32:01.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-4589" for this suite. 
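The CronJob test above creates a schedule that fires every minute with a long-running job, then verifies that more than one Job is active at the same time. A minimal hand-runnable sketch of the same behaviour against any v1.23 cluster (the name, image, and sleep duration here are illustrative, not what the e2e framework deploys):
```
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: concurrent-demo            # illustrative name
spec:
  schedule: "*/1 * * * *"          # fire every minute
  concurrencyPolicy: Allow         # let a new Job start while the old one still runs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: sleeper
            image: busybox
            command: ["sleep", "300"]   # outlive the schedule interval on purpose
EOF
kubectl get jobs --watch             # after ~2 minutes, expect 2+ active Jobs
```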
+ +• [SLOW TEST:98.115 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":346,"completed":190,"skipped":3597,"failed":0} +S +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:32:01.577: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-bgg8 +STEP: Creating a pod to test atomic-volume-subpath +Aug 3 07:32:01.693: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bgg8" in namespace "subpath-5452" to be "Succeeded or Failed" +Aug 3 07:32:01.707: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.750419ms +Aug 3 07:32:03.715: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02188869s +Aug 3 07:32:05.727: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Running", Reason="", readiness=true. Elapsed: 4.033929663s +Aug 3 07:32:07.742: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Running", Reason="", readiness=true. Elapsed: 6.048505035s +Aug 3 07:32:09.757: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Running", Reason="", readiness=true. Elapsed: 8.06374664s +Aug 3 07:32:11.773: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Running", Reason="", readiness=true. Elapsed: 10.079475321s +Aug 3 07:32:13.783: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Running", Reason="", readiness=true. Elapsed: 12.089855426s +Aug 3 07:32:15.801: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Running", Reason="", readiness=true. Elapsed: 14.107522387s +Aug 3 07:32:17.815: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Running", Reason="", readiness=true. Elapsed: 16.121726075s +Aug 3 07:32:19.829: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Running", Reason="", readiness=true. Elapsed: 18.136085382s +Aug 3 07:32:21.847: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Running", Reason="", readiness=true. Elapsed: 20.153474782s +Aug 3 07:32:23.856: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Running", Reason="", readiness=true. Elapsed: 22.163024438s +Aug 3 07:32:25.871: INFO: Pod "pod-subpath-test-configmap-bgg8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.177795417s +STEP: Saw pod success +Aug 3 07:32:25.871: INFO: Pod "pod-subpath-test-configmap-bgg8" satisfied condition "Succeeded or Failed" +Aug 3 07:32:25.876: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-subpath-test-configmap-bgg8 container test-container-subpath-configmap-bgg8: +STEP: delete the pod +Aug 3 07:32:25.940: INFO: Waiting for pod pod-subpath-test-configmap-bgg8 to disappear +Aug 3 07:32:25.944: INFO: Pod pod-subpath-test-configmap-bgg8 no longer exists +STEP: Deleting pod pod-subpath-test-configmap-bgg8 +Aug 3 07:32:25.944: INFO: Deleting pod "pod-subpath-test-configmap-bgg8" in namespace "subpath-5452" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:32:25.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-5452" for this suite. + +• [SLOW TEST:24.390 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":346,"completed":191,"skipped":3598,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:32:25.968: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pdb that targets all three pods in a test replica set +STEP: Waiting for the pdb to be processed +STEP: First trying to evict a pod which shouldn't be evictable +STEP: Waiting for all pods to be running +Aug 3 07:32:28.080: INFO: pods: 0 < 3 +Aug 3 07:32:30.116: INFO: running pods: 0 < 3 +Aug 3 07:32:32.096: INFO: running pods: 2 < 3 +STEP: locating a running pod +STEP: Updating the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +STEP: Waiting for the pdb to observed all healthy pods +STEP: Patching the pdb to disallow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Waiting for all 
pods to be running +Aug 3 07:32:38.257: INFO: running pods: 2 < 3 +STEP: locating a running pod +STEP: Deleting the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be deleted +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:32:40.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-6842" for this suite. + +• [SLOW TEST:14.381 seconds] +[sig-apps] DisruptionController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":346,"completed":192,"skipped":3676,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:32:40.350: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:32:41.296: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 07:32:43.328: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 32, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 32, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 32, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 32, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:32:45.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:time.Date(2022, time.August, 3, 7, 32, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 32, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 32, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 32, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 07:32:48.368: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the crd webhook via the AdmissionRegistration API +STEP: Creating a custom resource definition that should be denied by the webhook +Aug 3 07:32:48.408: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:32:48.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4275" for this suite. +STEP: Destroying namespace "webhook-4275-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:8.259 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":346,"completed":193,"skipped":3721,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:32:48.611: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-4512 +[It] Scaling should happen in predictable order and halt if any 
stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace statefulset-4512 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4512 +Aug 3 07:32:48.822: INFO: Found 0 stateful pods, waiting for 1 +Aug 3 07:32:58.843: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Aug 3 07:32:58.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-4512 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 3 07:32:59.136: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 3 07:32:59.136: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 3 07:32:59.136: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 3 07:32:59.144: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Aug 3 07:33:09.174: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Aug 3 07:33:09.174: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 3 07:33:09.209: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999367s +Aug 3 07:33:10.249: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99128726s +Aug 3 07:33:11.262: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.950828898s +Aug 3 07:33:12.275: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.93871742s +Aug 3 07:33:13.295: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.925009789s +Aug 3 07:33:14.305: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.904344996s +Aug 3 07:33:15.315: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.895018689s +Aug 3 07:33:16.325: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.884879118s +Aug 3 07:33:17.335: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.875404592s +Aug 3 07:33:18.352: INFO: Verifying statefulset ss doesn't scale past 1 for another 865.701115ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4512 +Aug 3 07:33:19.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-4512 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 3 07:33:19.648: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 3 07:33:19.648: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 3 07:33:19.648: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 3 07:33:19.657: INFO: Found 1 stateful pods, waiting for 3 +Aug 3 07:33:29.684: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:33:29.684: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:33:29.684: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running 
- Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Aug 3 07:33:29.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-4512 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 3 07:33:29.972: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 3 07:33:29.972: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 3 07:33:29.972: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 3 07:33:29.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-4512 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 3 07:33:30.226: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 3 07:33:30.226: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 3 07:33:30.226: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 3 07:33:30.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-4512 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 3 07:33:30.480: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 3 07:33:30.480: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 3 07:33:30.480: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 3 07:33:30.480: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 3 07:33:30.488: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Aug 3 07:33:40.521: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Aug 3 07:33:40.521: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Aug 3 07:33:40.521: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Aug 3 07:33:40.544: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999935s +Aug 3 07:33:41.555: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993323182s +Aug 3 07:33:42.566: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981872737s +Aug 3 07:33:43.577: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.970357415s +Aug 3 07:33:44.590: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.958788233s +Aug 3 07:33:45.601: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.946227009s +Aug 3 07:33:46.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.936365803s +Aug 3 07:33:47.623: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.925964107s +Aug 3 07:33:48.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.913763344s +Aug 3 07:33:49.644: INFO: Verifying statefulset ss doesn't scale past 3 for another 902.526613ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4512 +Aug 3 07:33:50.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 
--namespace=statefulset-4512 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 3 07:33:50.941: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 3 07:33:50.941: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 3 07:33:50.941: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 3 07:33:50.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-4512 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 3 07:33:51.202: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 3 07:33:51.202: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 3 07:33:51.202: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 3 07:33:51.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-4512 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 3 07:33:51.464: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 3 07:33:51.464: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 3 07:33:51.464: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 3 07:33:51.464: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 3 07:34:01.520: INFO: Deleting all statefulset in ns statefulset-4512 +Aug 3 07:34:01.525: INFO: Scaling statefulset ss to 0 +Aug 3 07:34:01.556: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 3 07:34:01.570: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:34:01.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-4512" for this suite. 
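The halting behaviour exercised above comes from the default `OrderedReady` pod management policy: while any pod is unready, the controller neither creates the next ordinal nor removes the previous one. A minimal sketch of a StatefulSet whose readiness can be broken the same way the test does it, by moving the probed file aside (names and probe details are illustrative; the headless Service is omitted for brevity):
```
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-demo                      # illustrative
spec:
  serviceName: ss-demo               # headless Service omitted for brevity
  replicas: 3
  podManagementPolicy: OrderedReady  # the default: one ordinal at a time, in order
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: web
        image: httpd:2.4
        readinessProbe:
          httpGet:
            path: /index.html        # mv this file aside inside the pod to go unready
            port: 80
EOF
```
While ss-demo-0 is unready, a scale-up makes no progress past ordinal 0, which is exactly the "doesn't scale past 1" loop in the log above.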
+ +• [SLOW TEST:73.013 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":346,"completed":194,"skipped":3769,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:34:01.624: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:34:02.364: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 07:34:04.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 34, 2, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 34, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 34, 2, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 34, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:34:06.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 34, 2, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 34, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 34, 2, 0, time.Local), 
LastTransitionTime:time.Date(2022, time.August, 3, 7, 34, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 07:34:09.432: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that should be mutated +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that should not be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:34:09.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1642" for this suite. +STEP: Destroying namespace "webhook-1642-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:8.288 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":195,"skipped":3780,"failed":0} +SSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:34:09.913: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption is created +Aug 3 07:34:10.001: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:34:12.014: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:34:14.019: INFO: The status of Pod pod-adoption is Running (Ready = true) +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:34:15.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-2302" for this suite. + +• [SLOW TEST:5.156 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":346,"completed":196,"skipped":3785,"failed":0} +SSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:34:15.070: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create deployment with httpd image +Aug 3 07:34:15.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5804 create -f -' +Aug 3 07:34:15.439: INFO: stderr: "" +Aug 3 07:34:15.439: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image +Aug 3 07:34:15.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5804 diff -f -' +Aug 3 07:34:15.836: INFO: rc: 1 +Aug 3 07:34:15.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-5804 delete -f -' +Aug 3 07:34:15.945: INFO: stderr: "" +Aug 3 07:34:15.945: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:34:15.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5804" for this suite. 
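The `rc: 1` above is the expected outcome, not a failure: `kubectl diff` exits 0 when live and declared state match, 1 when it finds a difference (as here, where the declared image differs from the live one), and greater than 1 on an actual error. A quick way to see the same exit-code contract by hand (deployment name and image tags are illustrative):
```
kubectl create deployment web --image=httpd:2.4.38
kubectl create deployment web --image=httpd:2.4.39 --dry-run=client -o yaml \
  | kubectl diff -f -               # prints the image drift between declared and live
echo $?                             # 1 == difference found; 0 == none; >1 == error
```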
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":197,"skipped":3793,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:34:15.970: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:34:16.066: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:34:19.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-9396" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":346,"completed":198,"skipped":3814,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:34:19.415: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-c5c51ca7-75c9-4472-8a80-833c9013855a +STEP: Creating a pod to test consume secrets +Aug 3 07:34:19.516: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fb4cd3c8-4503-463c-bcc9-d87b7fb7e042" in namespace "projected-6244" to be "Succeeded or Failed" +Aug 3 07:34:19.534: INFO: Pod "pod-projected-secrets-fb4cd3c8-4503-463c-bcc9-d87b7fb7e042": Phase="Pending", Reason="", readiness=false. Elapsed: 17.585398ms +Aug 3 07:34:21.560: INFO: Pod "pod-projected-secrets-fb4cd3c8-4503-463c-bcc9-d87b7fb7e042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044238772s +Aug 3 07:34:23.574: INFO: Pod "pod-projected-secrets-fb4cd3c8-4503-463c-bcc9-d87b7fb7e042": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.057553857s +STEP: Saw pod success +Aug 3 07:34:23.574: INFO: Pod "pod-projected-secrets-fb4cd3c8-4503-463c-bcc9-d87b7fb7e042" satisfied condition "Succeeded or Failed" +Aug 3 07:34:23.578: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-secrets-fb4cd3c8-4503-463c-bcc9-d87b7fb7e042 container projected-secret-volume-test: +STEP: delete the pod +Aug 3 07:34:23.638: INFO: Waiting for pod pod-projected-secrets-fb4cd3c8-4503-463c-bcc9-d87b7fb7e042 to disappear +Aug 3 07:34:23.645: INFO: Pod pod-projected-secrets-fb4cd3c8-4503-463c-bcc9-d87b7fb7e042 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:34:23.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6244" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":199,"skipped":3831,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:34:23.665: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Aug 3 07:34:23.728: INFO: Waiting up to 5m0s for pod "pod-63b1bf71-e76d-4771-b956-3b9225de872e" in namespace "emptydir-4924" to be "Succeeded or Failed" +Aug 3 07:34:23.735: INFO: Pod "pod-63b1bf71-e76d-4771-b956-3b9225de872e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.552967ms +Aug 3 07:34:25.745: INFO: Pod "pod-63b1bf71-e76d-4771-b956-3b9225de872e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016790723s +Aug 3 07:34:27.761: INFO: Pod "pod-63b1bf71-e76d-4771-b956-3b9225de872e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032746123s +STEP: Saw pod success +Aug 3 07:34:27.761: INFO: Pod "pod-63b1bf71-e76d-4771-b956-3b9225de872e" satisfied condition "Succeeded or Failed" +Aug 3 07:34:27.766: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-63b1bf71-e76d-4771-b956-3b9225de872e container test-container: +STEP: delete the pod +Aug 3 07:34:27.815: INFO: Waiting for pod pod-63b1bf71-e76d-4771-b956-3b9225de872e to disappear +Aug 3 07:34:27.825: INFO: Pod pod-63b1bf71-e76d-4771-b956-3b9225de872e no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:34:27.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4924" for this suite. 
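The (root,0666,default) variant above writes a file with mode 0666 into an emptyDir backed by the default medium (node disk) and reads it back as root. A hand-runnable sketch of the same mount (pod name and paths are illustrative):
```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "touch /ed/f && chmod 0666 /ed/f && ls -l /ed/f"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}                     # default medium: node disk; medium: Memory would use tmpfs
EOF
kubectl logs emptydir-demo           # expect -rw-rw-rw- on /ed/f
```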
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":200,"skipped":3855,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:34:27.850: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service multi-endpoint-test in namespace services-3891 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3891 to expose endpoints map[] +Aug 3 07:34:27.956: INFO: successfully validated that service multi-endpoint-test in namespace services-3891 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-3891 +Aug 3 07:34:27.987: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:34:30.000: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:34:32.000: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:34:34.003: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3891 to expose endpoints map[pod1:[100]] +Aug 3 07:34:34.037: INFO: successfully validated that service multi-endpoint-test in namespace services-3891 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-3891 +Aug 3 07:34:34.054: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:34:36.071: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:34:38.070: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3891 to expose endpoints map[pod1:[100] pod2:[101]] +Aug 3 07:34:38.112: INFO: successfully validated that service multi-endpoint-test in namespace services-3891 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods +Aug 3 07:34:38.112: INFO: Creating new exec pod +Aug 3 07:34:43.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-3891 exec execpod2ltk8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' +Aug 3 07:34:43.457: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Aug 3 07:34:43.457: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 07:34:43.457: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-3891 exec execpod2ltk8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.190.3 80' +Aug 3 07:34:43.781: INFO: stderr: "+ + echonc -v hostName -t\n -w 2 172.31.190.3 80\nConnection to 172.31.190.3 80 port [tcp/http] succeeded!\n" +Aug 3 07:34:43.781: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 07:34:43.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-3891 exec execpod2ltk8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' +Aug 3 07:34:44.069: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Aug 3 07:34:44.069: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 07:34:44.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-3891 exec execpod2ltk8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.190.3 81' +Aug 3 07:34:44.394: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.190.3 81\nConnection to 172.31.190.3 81 port [tcp/*] succeeded!\n" +Aug 3 07:34:44.394: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-3891 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3891 to expose endpoints map[pod2:[101]] +Aug 3 07:34:44.461: INFO: successfully validated that service multi-endpoint-test in namespace services-3891 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-3891 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3891 to expose endpoints map[] +Aug 3 07:34:44.525: INFO: successfully validated that service multi-endpoint-test in namespace services-3891 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:34:44.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3891" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:16.808 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":346,"completed":201,"skipped":3886,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:34:44.659: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:34:44.787: INFO: Creating pod... +Aug 3 07:34:44.844: INFO: Pod Quantity: 1 Status: Pending +Aug 3 07:34:45.868: INFO: Pod Quantity: 1 Status: Pending +Aug 3 07:34:46.858: INFO: Pod Quantity: 1 Status: Pending +Aug 3 07:34:47.854: INFO: Pod Quantity: 1 Status: Pending +Aug 3 07:34:48.856: INFO: Pod Status: Running +Aug 3 07:34:48.856: INFO: Creating service... 
+Aug 3 07:34:48.871: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/pods/agnhost/proxy/some/path/with/DELETE +Aug 3 07:34:48.883: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Aug 3 07:34:48.883: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/pods/agnhost/proxy/some/path/with/GET +Aug 3 07:34:48.890: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Aug 3 07:34:48.890: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/pods/agnhost/proxy/some/path/with/HEAD +Aug 3 07:34:48.900: INFO: http.Client request:HEAD | StatusCode:200 +Aug 3 07:34:48.900: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/pods/agnhost/proxy/some/path/with/OPTIONS +Aug 3 07:34:48.909: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Aug 3 07:34:48.909: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/pods/agnhost/proxy/some/path/with/PATCH +Aug 3 07:34:48.915: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Aug 3 07:34:48.916: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/pods/agnhost/proxy/some/path/with/POST +Aug 3 07:34:48.921: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Aug 3 07:34:48.921: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/pods/agnhost/proxy/some/path/with/PUT +Aug 3 07:34:48.928: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Aug 3 07:34:48.928: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/services/test-service/proxy/some/path/with/DELETE +Aug 3 07:34:48.937: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Aug 3 07:34:48.937: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/services/test-service/proxy/some/path/with/GET +Aug 3 07:34:48.945: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Aug 3 07:34:48.945: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/services/test-service/proxy/some/path/with/HEAD +Aug 3 07:34:48.953: INFO: http.Client request:HEAD | StatusCode:200 +Aug 3 07:34:48.953: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/services/test-service/proxy/some/path/with/OPTIONS +Aug 3 07:34:48.962: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Aug 3 07:34:48.962: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/services/test-service/proxy/some/path/with/PATCH +Aug 3 07:34:48.969: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Aug 3 07:34:48.969: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/services/test-service/proxy/some/path/with/POST +Aug 3 07:34:48.987: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Aug 3 07:34:48.987: INFO: Starting http.Client for https://172.31.0.1:443/api/v1/namespaces/proxy-7507/services/test-service/proxy/some/path/with/PUT +Aug 3 07:34:49.001: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 
07:34:49.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-7507" for this suite. +•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":202,"skipped":3901,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:34:49.025: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override command +Aug 3 07:34:49.121: INFO: Waiting up to 5m0s for pod "client-containers-f3cfea84-a819-40d9-82cf-1d43da1570f1" in namespace "containers-1718" to be "Succeeded or Failed" +Aug 3 07:34:49.128: INFO: Pod "client-containers-f3cfea84-a819-40d9-82cf-1d43da1570f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.560169ms +Aug 3 07:34:51.141: INFO: Pod "client-containers-f3cfea84-a819-40d9-82cf-1d43da1570f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019440469s +Aug 3 07:34:53.153: INFO: Pod "client-containers-f3cfea84-a819-40d9-82cf-1d43da1570f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031978908s +STEP: Saw pod success +Aug 3 07:34:53.154: INFO: Pod "client-containers-f3cfea84-a819-40d9-82cf-1d43da1570f1" satisfied condition "Succeeded or Failed" +Aug 3 07:34:53.160: INFO: Trying to get logs from node dce-10-6-213-50 pod client-containers-f3cfea84-a819-40d9-82cf-1d43da1570f1 container agnhost-container: +STEP: delete the pod +Aug 3 07:34:53.210: INFO: Waiting for pod client-containers-f3cfea84-a819-40d9-82cf-1d43da1570f1 to disappear +Aug 3 07:34:53.215: INFO: Pod client-containers-f3cfea84-a819-40d9-82cf-1d43da1570f1 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:34:53.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-1718" for this suite. 
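The override exercised above is the pod `command:` field, which replaces the image's ENTRYPOINT (while `args:` would replace its CMD). A minimal sketch (pod name and message are illustrative):
```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-demo              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["echo"]                         # replaces the image ENTRYPOINT
    args: ["hello from the override"]         # replaces the image CMD
EOF
kubectl logs entrypoint-demo          # prints the overridden output once the container exits
```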
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":346,"completed":203,"skipped":3936,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:34:53.236: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-d464c4e7-dfed-4e19-bc9b-bb4bc65e8aec +STEP: Creating a pod to test consume configMaps +Aug 3 07:34:53.307: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-80bde485-2d03-454b-a13c-c0e00f943ec2" in namespace "projected-3542" to be "Succeeded or Failed" +Aug 3 07:34:53.316: INFO: Pod "pod-projected-configmaps-80bde485-2d03-454b-a13c-c0e00f943ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72848ms +Aug 3 07:34:55.326: INFO: Pod "pod-projected-configmaps-80bde485-2d03-454b-a13c-c0e00f943ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018396004s +Aug 3 07:34:57.341: INFO: Pod "pod-projected-configmaps-80bde485-2d03-454b-a13c-c0e00f943ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033587046s +Aug 3 07:34:59.362: INFO: Pod "pod-projected-configmaps-80bde485-2d03-454b-a13c-c0e00f943ec2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054606671s +STEP: Saw pod success +Aug 3 07:34:59.362: INFO: Pod "pod-projected-configmaps-80bde485-2d03-454b-a13c-c0e00f943ec2" satisfied condition "Succeeded or Failed" +Aug 3 07:34:59.367: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-configmaps-80bde485-2d03-454b-a13c-c0e00f943ec2 container agnhost-container: +STEP: delete the pod +Aug 3 07:34:59.405: INFO: Waiting for pod pod-projected-configmaps-80bde485-2d03-454b-a13c-c0e00f943ec2 to disappear +Aug 3 07:34:59.417: INFO: Pod pod-projected-configmaps-80bde485-2d03-454b-a13c-c0e00f943ec2 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:34:59.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3542" for this suite. 
+ +• [SLOW TEST:6.205 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":204,"skipped":3954,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:34:59.441: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-b759193b-09a7-46ad-8325-e6f6a8ac0946 +STEP: Creating a pod to test consume configMaps +Aug 3 07:34:59.631: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd553bbd-a073-4d11-a0cf-f24903f4082d" in namespace "configmap-7816" to be "Succeeded or Failed" +Aug 3 07:34:59.642: INFO: Pod "pod-configmaps-fd553bbd-a073-4d11-a0cf-f24903f4082d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.820756ms +Aug 3 07:35:01.656: INFO: Pod "pod-configmaps-fd553bbd-a073-4d11-a0cf-f24903f4082d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025007241s +Aug 3 07:35:03.674: INFO: Pod "pod-configmaps-fd553bbd-a073-4d11-a0cf-f24903f4082d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042763318s +Aug 3 07:35:05.689: INFO: Pod "pod-configmaps-fd553bbd-a073-4d11-a0cf-f24903f4082d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057307226s +STEP: Saw pod success +Aug 3 07:35:05.689: INFO: Pod "pod-configmaps-fd553bbd-a073-4d11-a0cf-f24903f4082d" satisfied condition "Succeeded or Failed" +Aug 3 07:35:05.695: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-configmaps-fd553bbd-a073-4d11-a0cf-f24903f4082d container agnhost-container: +STEP: delete the pod +Aug 3 07:35:05.725: INFO: Waiting for pod pod-configmaps-fd553bbd-a073-4d11-a0cf-f24903f4082d to disappear +Aug 3 07:35:05.731: INFO: Pod pod-configmaps-fd553bbd-a073-4d11-a0cf-f24903f4082d no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:35:05.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7816" for this suite. 
+ +• [SLOW TEST:6.306 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":205,"skipped":3962,"failed":0} +S +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:35:05.748: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Aug 3 07:35:05.809: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Aug 3 07:35:05.825: INFO: Waiting for terminating namespaces to be deleted... +Aug 3 07:35:05.830: INFO: +Logging pods the apiserver thinks is on node dce-10-6-213-40 before test +Aug 3 07:35:05.842: INFO: dce-system-dnsservice-5fd54fd444-4b57d from dce-system started at 2022-08-03 03:54:34 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container dce-system-dnsservice ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: calico-node-ftbqq from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container calico-node ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: coredns-coredns-6b6c46d8b7-5dgzm from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container coredns ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: coredns-coredns-6b6c46d8b7-tb89f from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container coredns ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: dce-engine-htt6p from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container dce-engine ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: dce-kube-apiserver-proxy-dce-10-6-213-40 from kube-system started at 2022-08-01 07:26:27 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container dce-kube-apiserver-proxy ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: dce-parcel-agent-5xx9x from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container dce-parcel-agent ready: true, restart count 1 +Aug 3 07:35:05.842: INFO: dce-uds-host-driver-2w76c from kube-system started at 2022-08-02 09:36:09 +0000 UTC (2 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container dce-uds-csi-driver-prober ready: true, restart count 0 +Aug 3 
07:35:05.842: INFO: Container metrics-collector ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: dce-uds-policy-controller-6f4848f45d-8jhgc from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container dce-uds-policy-controller ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: dce-uds-snapshot-controller-7b76dc77c9-5tkg8 from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container snapshotter ready: true, restart count 2 +Aug 3 07:35:05.842: INFO: kube-proxy-fpf4g from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container kube-proxy ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: metrics-server-55db7974f8-2jq52 from kube-system started at 2022-08-02 09:40:49 +0000 UTC (2 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container metrics-server ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: Container metrics-server-nanny ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: node-local-dns-c7shk from kube-system started at 2022-08-02 07:46:48 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container node-cache ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: sonobuoy-systemd-logs-daemon-set-10147ad5bf5a4ba1-xplgl from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded) +Aug 3 07:35:05.842: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: Container systemd-logs ready: true, restart count 0 +Aug 3 07:35:05.842: INFO: +Logging pods the apiserver thinks is on node dce-10-6-213-50 before test +Aug 3 07:35:05.856: INFO: calico-node-s6xjf from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.856: INFO: Container calico-node ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: dce-engine-6d4wp from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.856: INFO: Container dce-engine ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: dce-kube-apiserver-proxy-dce-10-6-213-50 from kube-system started at 2022-08-01 07:26:33 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.856: INFO: Container dce-kube-apiserver-proxy ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: dce-parcel-agent-t4d24 from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.856: INFO: Container dce-parcel-agent ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: dce-uds-host-driver-nqcxc from kube-system started at 2022-08-02 09:40:52 +0000 UTC (2 container statuses recorded) +Aug 3 07:35:05.856: INFO: Container dce-uds-csi-driver-prober ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: Container metrics-collector ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: kube-proxy-j6g24 from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.856: INFO: Container kube-proxy ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: node-local-dns-vpkp7 from kube-system started at 2022-08-03 07:30:26 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.856: INFO: Container node-cache ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: sonobuoy from sonobuoy started at 2022-08-03 06:16:12 +0000 UTC (1 container statuses recorded) +Aug 3 07:35:05.856: INFO: Container 
kube-sonobuoy ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: sonobuoy-e2e-job-eb6a0f3fa9794033 from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded) +Aug 3 07:35:05.856: INFO: Container e2e ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: sonobuoy-systemd-logs-daemon-set-10147ad5bf5a4ba1-gxfgs from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded) +Aug 3 07:35:05.856: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 3 07:35:05.856: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: verifying the node has the label node dce-10-6-213-40 +STEP: verifying the node has the label node dce-10-6-213-50 +Aug 3 07:35:06.020: INFO: Pod dce-system-dnsservice-5fd54fd444-4b57d requesting resource cpu=300m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod calico-node-ftbqq requesting resource cpu=200m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod calico-node-s6xjf requesting resource cpu=200m on Node dce-10-6-213-50 +Aug 3 07:35:06.020: INFO: Pod coredns-coredns-6b6c46d8b7-5dgzm requesting resource cpu=500m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod coredns-coredns-6b6c46d8b7-tb89f requesting resource cpu=500m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod dce-engine-6d4wp requesting resource cpu=80m on Node dce-10-6-213-50 +Aug 3 07:35:06.020: INFO: Pod dce-engine-htt6p requesting resource cpu=80m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod dce-kube-apiserver-proxy-dce-10-6-213-40 requesting resource cpu=100m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod dce-kube-apiserver-proxy-dce-10-6-213-50 requesting resource cpu=100m on Node dce-10-6-213-50 +Aug 3 07:35:06.020: INFO: Pod dce-parcel-agent-5xx9x requesting resource cpu=200m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod dce-parcel-agent-t4d24 requesting resource cpu=200m on Node dce-10-6-213-50 +Aug 3 07:35:06.020: INFO: Pod dce-uds-host-driver-2w76c requesting resource cpu=100m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod dce-uds-host-driver-nqcxc requesting resource cpu=100m on Node dce-10-6-213-50 +Aug 3 07:35:06.020: INFO: Pod dce-uds-policy-controller-6f4848f45d-8jhgc requesting resource cpu=100m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod dce-uds-snapshot-controller-7b76dc77c9-5tkg8 requesting resource cpu=50m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod kube-proxy-fpf4g requesting resource cpu=100m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod kube-proxy-j6g24 requesting resource cpu=100m on Node dce-10-6-213-50 +Aug 3 07:35:06.020: INFO: Pod metrics-server-55db7974f8-2jq52 requesting resource cpu=79m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod node-local-dns-c7shk requesting resource cpu=250m on Node dce-10-6-213-40 +Aug 3 07:35:06.020: INFO: Pod node-local-dns-vpkp7 requesting resource cpu=250m on Node dce-10-6-213-50 +Aug 3 07:35:06.020: INFO: Pod sonobuoy requesting resource cpu=0m on Node dce-10-6-213-50 +Aug 3 07:35:06.020: INFO: Pod sonobuoy-e2e-job-eb6a0f3fa9794033 requesting resource cpu=0m on Node dce-10-6-213-50 +Aug 3 07:35:06.020: INFO: Pod sonobuoy-systemd-logs-daemon-set-10147ad5bf5a4ba1-gxfgs requesting resource cpu=0m on Node 
dce-10-6-213-50 +Aug 3 07:35:06.020: INFO: Pod sonobuoy-systemd-logs-daemon-set-10147ad5bf5a4ba1-xplgl requesting resource cpu=0m on Node dce-10-6-213-40 +STEP: Starting Pods to consume most of the cluster CPU. +Aug 3 07:35:06.020: INFO: Creating a pod which consumes cpu=3458m on Node dce-10-6-213-40 +Aug 3 07:35:06.110: INFO: Creating a pod which consumes cpu=4529m on Node dce-10-6-213-50 +STEP: Creating another pod that requires unavailable amount of CPU. +STEP: Considering event: +Type = [Normal], Name = [filler-pod-6b3200b4-f3d4-41ff-8054-c1ec00993ee0.1707c577d060e3d2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5115/filler-pod-6b3200b4-f3d4-41ff-8054-c1ec00993ee0 to dce-10-6-213-40] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-6b3200b4-f3d4-41ff-8054-c1ec00993ee0.1707c5786f97ec83], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.6" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-6b3200b4-f3d4-41ff-8054-c1ec00993ee0.1707c57871bfa1bf], Reason = [Created], Message = [Created container filler-pod-6b3200b4-f3d4-41ff-8054-c1ec00993ee0] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-6b3200b4-f3d4-41ff-8054-c1ec00993ee0.1707c5787dd88814], Reason = [Started], Message = [Started container filler-pod-6b3200b4-f3d4-41ff-8054-c1ec00993ee0] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-844c07de-1a8c-4b7c-9688-b71fb26bfcff.1707c577d0f8499a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5115/filler-pod-844c07de-1a8c-4b7c-9688-b71fb26bfcff to dce-10-6-213-50] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-844c07de-1a8c-4b7c-9688-b71fb26bfcff.1707c578802cb537], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.6" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-844c07de-1a8c-4b7c-9688-b71fb26bfcff.1707c578833e1791], Reason = [Created], Message = [Created container filler-pod-844c07de-1a8c-4b7c-9688-b71fb26bfcff] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-844c07de-1a8c-4b7c-9688-b71fb26bfcff.1707c5789482ada2], Reason = [Started], Message = [Started container filler-pod-844c07de-1a8c-4b7c-9688-b71fb26bfcff] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.1707c5793aa06c4a], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] +STEP: removing the label node off the node dce-10-6-213-40 +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node dce-10-6-213-50 +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:35:13.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-5115" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 + +• [SLOW TEST:7.587 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":346,"completed":206,"skipped":3963,"failed":0} +SS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:35:13.335: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Aug 3 07:35:13.493: INFO: The status of Pod labelsupdatea29bba85-e16e-4ea6-aadc-602443e72406 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:35:15.503: INFO: The status of Pod labelsupdatea29bba85-e16e-4ea6-aadc-602443e72406 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:35:17.506: INFO: The status of Pod labelsupdatea29bba85-e16e-4ea6-aadc-602443e72406 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:35:19.507: INFO: The status of Pod labelsupdatea29bba85-e16e-4ea6-aadc-602443e72406 is Running (Ready = true) +Aug 3 07:35:20.054: INFO: Successfully updated pod "labelsupdatea29bba85-e16e-4ea6-aadc-602443e72406" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:35:22.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3688" for this suite. 
+ +• [SLOW TEST:8.800 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":207,"skipped":3965,"failed":0} +SSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:35:22.135: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-ad874997-8667-4b19-8dfe-195d7570a0d8 +STEP: Creating a pod to test consume secrets +Aug 3 07:35:22.264: INFO: Waiting up to 5m0s for pod "pod-secrets-2463e079-bf2f-4c22-a4ab-2c06c9264c07" in namespace "secrets-3236" to be "Succeeded or Failed" +Aug 3 07:35:22.274: INFO: Pod "pod-secrets-2463e079-bf2f-4c22-a4ab-2c06c9264c07": Phase="Pending", Reason="", readiness=false. Elapsed: 9.20695ms +Aug 3 07:35:24.285: INFO: Pod "pod-secrets-2463e079-bf2f-4c22-a4ab-2c06c9264c07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020118498s +Aug 3 07:35:26.300: INFO: Pod "pod-secrets-2463e079-bf2f-4c22-a4ab-2c06c9264c07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034997782s +STEP: Saw pod success +Aug 3 07:35:26.300: INFO: Pod "pod-secrets-2463e079-bf2f-4c22-a4ab-2c06c9264c07" satisfied condition "Succeeded or Failed" +Aug 3 07:35:26.306: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-secrets-2463e079-bf2f-4c22-a4ab-2c06c9264c07 container secret-volume-test: +STEP: delete the pod +Aug 3 07:35:26.342: INFO: Waiting for pod pod-secrets-2463e079-bf2f-4c22-a4ab-2c06c9264c07 to disappear +Aug 3 07:35:26.347: INFO: Pod pod-secrets-2463e079-bf2f-4c22-a4ab-2c06c9264c07 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:35:26.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3236" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":208,"skipped":3968,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:35:26.371: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Pod that fits quota +STEP: Ensuring ResourceQuota status captures the pod usage +STEP: Not allowing a pod to be created that exceeds remaining quota +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) +STEP: Ensuring a pod cannot update its resource requirements +STEP: Ensuring attempts to update pod resource requirements did not change quota usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:35:39.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4600" for this suite. + +• [SLOW TEST:13.222 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":346,"completed":209,"skipped":3984,"failed":0} +S +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:35:39.593: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-2723 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:35:39.687: INFO: Found 0 stateful pods, waiting for 1 +Aug 3 07:35:49.705: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet +Aug 3 07:35:49.750: INFO: Found 1 stateful pods, waiting for 2 +Aug 3 07:35:59.776: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Pending - Ready=false +Aug 3 07:36:09.770: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:36:09.770: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets +STEP: Delete all of the StatefulSets +STEP: Verify that StatefulSets have been deleted +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 3 07:36:09.815: INFO: Deleting all statefulset in ns statefulset-2723 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:36:09.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-2723" for this suite. 
+ +• [SLOW TEST:30.275 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":210,"skipped":3985,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:36:09.868: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename server-version +STEP: Waiting for a default service account to be provisioned in namespace +[It] should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Request ServerVersion +STEP: Confirm major version +Aug 3 07:36:09.963: INFO: Major version: 1 +STEP: Confirm minor version +Aug 3 07:36:09.963: INFO: cleanMinorVersion: 23 +Aug 3 07:36:09.963: INFO: Minor version: 23 +[AfterEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:36:09.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "server-version-7974" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":211,"skipped":3995,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:36:09.987: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-6549 +STEP: changing the ExternalName service to type=ClusterIP +STEP: creating replication controller externalname-service in namespace services-6549 +I0803 07:36:10.147889 21 runners.go:193] Created replication controller with name: externalname-service, namespace: services-6549, replica count: 2 +I0803 07:36:13.199517 21 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 07:36:16.200: INFO: Creating new exec pod +I0803 07:36:16.200168 21 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 07:36:23.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6549 exec execpodx7cxj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 3 07:36:23.560: INFO: stderr: "+ + echonc -v -t -w 2 externalname-service 80\n hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 3 07:36:23.560: INFO: stdout: "" +Aug 3 07:36:24.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6549 exec execpodx7cxj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 3 07:36:24.902: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 3 07:36:24.902: INFO: stdout: "" +Aug 3 07:36:25.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6549 exec execpodx7cxj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 3 07:36:25.859: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 3 07:36:25.859: INFO: stdout: "" +Aug 3 07:36:26.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6549 exec execpodx7cxj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 3 07:36:26.880: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to 
externalname-service 80 port [tcp/http] succeeded!\n" +Aug 3 07:36:26.880: INFO: stdout: "externalname-service-bbfxk" +Aug 3 07:36:26.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6549 exec execpodx7cxj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.42.21 80' +Aug 3 07:36:27.135: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.42.21 80\nConnection to 172.31.42.21 80 port [tcp/http] succeeded!\n" +Aug 3 07:36:27.135: INFO: stdout: "" +Aug 3 07:36:28.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6549 exec execpodx7cxj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.42.21 80' +Aug 3 07:36:28.433: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.42.21 80\nConnection to 172.31.42.21 80 port [tcp/http] succeeded!\n" +Aug 3 07:36:28.433: INFO: stdout: "externalname-service-lz8xl" +Aug 3 07:36:28.433: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:36:28.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6549" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:18.535 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":212,"skipped":4011,"failed":0} +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:36:28.522: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota +STEP: Getting a ResourceQuota +STEP: Updating a ResourceQuota +STEP: Verifying a ResourceQuota was modified +STEP: Deleting a ResourceQuota +STEP: Verifying the deleted ResourceQuota +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:36:28.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-2415" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":346,"completed":213,"skipped":4011,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:36:28.669: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-f3dd5426-3072-4b25-9748-65652b25f95a +STEP: Creating configMap with name cm-test-opt-upd-067d78b8-4a77-4914-9a75-5c5f64b3de6d +STEP: Creating the pod +Aug 3 07:36:28.780: INFO: The status of Pod pod-configmaps-e93ed15f-ce01-4ce9-b093-cf303831f7ac is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:36:30.797: INFO: The status of Pod pod-configmaps-e93ed15f-ce01-4ce9-b093-cf303831f7ac is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:36:32.791: INFO: The status of Pod pod-configmaps-e93ed15f-ce01-4ce9-b093-cf303831f7ac is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:36:34.797: INFO: The status of Pod pod-configmaps-e93ed15f-ce01-4ce9-b093-cf303831f7ac is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-f3dd5426-3072-4b25-9748-65652b25f95a +STEP: Updating configmap cm-test-opt-upd-067d78b8-4a77-4914-9a75-5c5f64b3de6d +STEP: Creating configMap with name cm-test-opt-create-5e5c0391-4c26-4d2d-95a6-07387dcac92c +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:36:37.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1410" for this suite. 
+ +• [SLOW TEST:8.358 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":214,"skipped":4034,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:36:37.029: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Aug 3 07:36:37.088: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-137 df9e10d0-1620-41b8-98fa-0d8e1881b591 630672 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 07:36:37.088: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-137 df9e10d0-1620-41b8-98fa-0d8e1881b591 630672 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Aug 3 07:36:37.099: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-137 df9e10d0-1620-41b8-98fa-0d8e1881b591 630673 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 07:36:37.099: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-137 df9e10d0-1620-41b8-98fa-0d8e1881b591 630673 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Aug 3 07:36:37.109: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-137 df9e10d0-1620-41b8-98fa-0d8e1881b591 630674 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 07:36:37.109: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-137 df9e10d0-1620-41b8-98fa-0d8e1881b591 630674 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Aug 3 07:36:37.117: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-137 df9e10d0-1620-41b8-98fa-0d8e1881b591 630675 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 07:36:37.117: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-137 df9e10d0-1620-41b8-98fa-0d8e1881b591 630675 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Aug 3 07:36:37.125: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-137 483635fc-c9f8-4db2-b0de-5f3ba4eff5cb 630676 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 07:36:37.126: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-137 483635fc-c9f8-4db2-b0de-5f3ba4eff5cb 630676 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Aug 3 07:36:47.152: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-137 483635fc-c9f8-4db2-b0de-5f3ba4eff5cb 630741 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 3 07:36:47.152: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-137 483635fc-c9f8-4db2-b0de-5f3ba4eff5cb 630741 0 2022-08-03 07:36:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:36:57.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-137" for this suite. 
+ +• [SLOW TEST:20.159 seconds] +[sig-api-machinery] Watchers +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":215,"skipped":4050,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:36:57.188: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4117.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4117.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4117.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4117.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 3 07:37:03.355: INFO: DNS probes using dns-4117/dns-test-cd3ee087-39a1-4f97-bff4-709769490e55 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:37:03.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-4117" for this suite. 
+ +• [SLOW TEST:6.223 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":346,"completed":216,"skipped":4094,"failed":0} +SSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:37:03.412: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Aug 3 07:37:03.493: INFO: Waiting up to 1m0s for all nodes to be ready +Aug 3 07:38:03.577: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:38:03.583: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:38:03.666: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. +Aug 3 07:38:03.673: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:38:03.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-5186" for this suite. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:38:03.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-2549" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:60.440 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":217,"skipped":4099,"failed":0} +SSSSSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:38:03.853: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:38:04.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9770" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":346,"completed":218,"skipped":4105,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:38:04.024: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation +Aug 3 07:38:04.090: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:38:08.022: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:38:26.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1881" for this suite. + +• [SLOW TEST:22.209 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":219,"skipped":4119,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:38:26.234: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-6587 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Aug 3 07:38:26.295: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Aug 3 07:38:26.363: INFO: The status of Pod netserver-0 is Pending, waiting for it to be 
Running (with Ready = true) +Aug 3 07:38:28.390: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:38:30.374: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:38:32.375: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:38:34.375: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:38:36.379: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:38:38.378: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:38:40.374: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:38:42.378: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:38:44.374: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:38:46.380: INFO: The status of Pod netserver-0 is Running (Ready = true) +Aug 3 07:38:46.391: INFO: The status of Pod netserver-1 is Running (Ready = false) +Aug 3 07:38:48.403: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Aug 3 07:38:52.472: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Aug 3 07:38:52.472: INFO: Going to poll 172.29.31.102 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Aug 3 07:38:52.480: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.29.31.102 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6587 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 07:38:52.480: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:38:52.482: INFO: ExecWithOptions: Clientset creation +Aug 3 07:38:52.482: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/pod-network-test-6587/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.29.31.102+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +Aug 3 07:38:53.671: INFO: Found all 1 expected endpoints: [netserver-0] +Aug 3 07:38:53.671: INFO: Going to poll 172.29.175.14 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Aug 3 07:38:53.682: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.29.175.14 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6587 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 07:38:53.682: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:38:53.683: INFO: ExecWithOptions: Clientset creation +Aug 3 07:38:53.683: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/pod-network-test-6587/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.29.175.14+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +Aug 3 07:38:54.827: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:38:54.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-6587" for this 
suite. + +• [SLOW TEST:28.628 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":220,"skipped":4135,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:38:54.863: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7589.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7589.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7589.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7589.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 3 07:38:59.014: INFO: DNS probes using dns-7589/dns-test-1ef40853-4cef-4d56-9d41-5dff964662be succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:38:59.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-7589" for this suite. 
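Editor's note: the probe commands logged above exercise the per-pod DNS records that a headless Service publishes when a pod sets matching `hostname` and `subdomain` fields. A self-contained sketch of that setup; all names and the image are illustrative, and any image with DNS tooling works:
```
# Headless Service plus a pod whose subdomain matches the Service name.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-subdomain
spec:
  clusterIP: None          # headless: required for per-pod DNS records
  selector:
    app: dns-demo
  ports:
  - name: http
    port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo
  labels:
    app: dns-demo
spec:
  hostname: demo-host
  subdomain: demo-subdomain   # must match the headless Service name
  containers:
  - name: main
    image: busybox:1.36
    command: ["sleep", "3600"]
EOF

# Give the pod a few seconds to become Ready, then resolve its stable name,
# which is what the getent loops above keep probing for:
kubectl exec dns-demo -- nslookup demo-host.demo-subdomain.default.svc.cluster.local
```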
+•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":221,"skipped":4155,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:38:59.136: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Aug 3 07:38:59.259: INFO: Waiting up to 5m0s for pod "downward-api-9f42ed2d-21f8-4b24-b7d9-348d79657f74" in namespace "downward-api-3709" to be "Succeeded or Failed" +Aug 3 07:38:59.267: INFO: Pod "downward-api-9f42ed2d-21f8-4b24-b7d9-348d79657f74": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055386ms +Aug 3 07:39:01.279: INFO: Pod "downward-api-9f42ed2d-21f8-4b24-b7d9-348d79657f74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020063447s +Aug 3 07:39:03.296: INFO: Pod "downward-api-9f42ed2d-21f8-4b24-b7d9-348d79657f74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036582075s +Aug 3 07:39:05.307: INFO: Pod "downward-api-9f42ed2d-21f8-4b24-b7d9-348d79657f74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048535485s +STEP: Saw pod success +Aug 3 07:39:05.308: INFO: Pod "downward-api-9f42ed2d-21f8-4b24-b7d9-348d79657f74" satisfied condition "Succeeded or Failed" +Aug 3 07:39:05.315: INFO: Trying to get logs from node dce-10-6-213-50 pod downward-api-9f42ed2d-21f8-4b24-b7d9-348d79657f74 container dapi-container: +STEP: delete the pod +Aug 3 07:39:05.378: INFO: Waiting for pod downward-api-9f42ed2d-21f8-4b24-b7d9-348d79657f74 to disappear +Aug 3 07:39:05.390: INFO: Pod downward-api-9f42ed2d-21f8-4b24-b7d9-348d79657f74 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:39:05.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3709" for this suite. 
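Editor's note: the downward API check above injects the node's IP into the container environment via a `fieldRef`. A minimal equivalent with illustrative names (the test's own pod uses a generated name and a `dapi-container`):
```
# Pod that exposes the node's IP to the container through the downward API.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF

# Once the pod reaches Succeeded, its log prints the node IP:
kubectl logs downward-demo
```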
+ +• [SLOW TEST:6.284 seconds] +[sig-node] Downward API +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":222,"skipped":4201,"failed":0} +SS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:39:05.420: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 +STEP: create the container to handle the HTTPGet hook request. +Aug 3 07:39:05.529: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:39:07.541: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:39:09.535: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:39:11.546: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Aug 3 07:39:11.580: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:39:13.598: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:39:15.592: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Aug 3 07:39:15.630: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Aug 3 07:39:15.639: INFO: Pod pod-with-prestop-exec-hook still exists +Aug 3 07:39:17.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Aug 3 07:39:17.656: INFO: Pod pod-with-prestop-exec-hook still exists +Aug 3 07:39:19.640: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Aug 3 07:39:19.650: INFO: Pod pod-with-prestop-exec-hook still exists +Aug 3 07:39:21.640: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Aug 3 07:39:21.655: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:39:21.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying 
namespace "container-lifecycle-hook-3842" for this suite. + +• [SLOW TEST:16.290 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":223,"skipped":4203,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:39:21.710: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating secret secrets-1587/secret-test-d3bd2255-d78e-413c-b304-f16d25805b23 +STEP: Creating a pod to test consume secrets +Aug 3 07:39:21.817: INFO: Waiting up to 5m0s for pod "pod-configmaps-a13bdd9e-16e9-4cb4-bcdd-a65296dcb168" in namespace "secrets-1587" to be "Succeeded or Failed" +Aug 3 07:39:21.825: INFO: Pod "pod-configmaps-a13bdd9e-16e9-4cb4-bcdd-a65296dcb168": Phase="Pending", Reason="", readiness=false. Elapsed: 8.429464ms +Aug 3 07:39:23.838: INFO: Pod "pod-configmaps-a13bdd9e-16e9-4cb4-bcdd-a65296dcb168": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021180573s +Aug 3 07:39:25.848: INFO: Pod "pod-configmaps-a13bdd9e-16e9-4cb4-bcdd-a65296dcb168": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030880736s +Aug 3 07:39:27.863: INFO: Pod "pod-configmaps-a13bdd9e-16e9-4cb4-bcdd-a65296dcb168": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045921919s +STEP: Saw pod success +Aug 3 07:39:27.863: INFO: Pod "pod-configmaps-a13bdd9e-16e9-4cb4-bcdd-a65296dcb168" satisfied condition "Succeeded or Failed" +Aug 3 07:39:27.869: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-configmaps-a13bdd9e-16e9-4cb4-bcdd-a65296dcb168 container env-test: +STEP: delete the pod +Aug 3 07:39:27.915: INFO: Waiting for pod pod-configmaps-a13bdd9e-16e9-4cb4-bcdd-a65296dcb168 to disappear +Aug 3 07:39:27.921: INFO: Pod pod-configmaps-a13bdd9e-16e9-4cb4-bcdd-a65296dcb168 no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:39:27.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1587" for this suite. 
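Editor's note: for the Secret-as-environment check just logged, the essential wiring is a `secretKeyRef` in the container's `env`. A hand-runnable sketch with illustrative names:
```
# Create a Secret, then consume one of its keys as an environment variable.
kubectl create secret generic demo-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.36
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
EOF

kubectl logs secret-env-demo   # prints SECRET_DATA=value-1 once the pod Succeeds
```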
+ +• [SLOW TEST:6.234 seconds] +[sig-node] Secrets +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":224,"skipped":4216,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:39:27.945: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-d3173c99-86cd-42a9-b698-f29906446a12 +STEP: Creating configMap with name cm-test-opt-upd-707e7bd0-f6ec-4de2-a974-dbda14383349 +STEP: Creating the pod +Aug 3 07:39:28.051: INFO: The status of Pod pod-projected-configmaps-1514b88a-e370-4d1c-95d5-7cf81179703c is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:39:30.063: INFO: The status of Pod pod-projected-configmaps-1514b88a-e370-4d1c-95d5-7cf81179703c is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:39:32.064: INFO: The status of Pod pod-projected-configmaps-1514b88a-e370-4d1c-95d5-7cf81179703c is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:39:34.074: INFO: The status of Pod pod-projected-configmaps-1514b88a-e370-4d1c-95d5-7cf81179703c is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-d3173c99-86cd-42a9-b698-f29906446a12 +STEP: Updating configmap cm-test-opt-upd-707e7bd0-f6ec-4de2-a974-dbda14383349 +STEP: Creating configMap with name cm-test-opt-create-a328d4d9-440f-4c51-aa22-6f2c4b49e591 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:39:38.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4621" for this suite. 
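Editor's note: the projected-volume test above marks its ConfigMap sources `optional`, which is what lets it delete one source and create another while the pod keeps running. A reduced sketch with illustrative names; file updates propagate on the kubelet sync period, hence the test's "waiting to observe update" step:
```
# Projected volume over an *optional* ConfigMap source.
kubectl create configmap demo-opt --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-opt
          optional: true   # pod starts even if the ConfigMap is absent
EOF

# Update the ConfigMap, then (eventually) see the change in the volume:
kubectl create configmap demo-opt --from-literal=data-1=value-2 \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl exec projected-demo -- cat /etc/projected/data-1
```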
+ +• [SLOW TEST:10.336 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":225,"skipped":4228,"failed":0} +SSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:39:38.281: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1409 +STEP: creating an pod +Aug 3 07:39:38.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2493 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Aug 3 07:39:38.554: INFO: stderr: "" +Aug 3 07:39:38.554: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for log generator to start. +Aug 3 07:39:38.554: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Aug 3 07:39:38.555: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2493" to be "running and ready, or succeeded" +Aug 3 07:39:38.562: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.190232ms +Aug 3 07:39:40.573: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018752923s +Aug 3 07:39:42.589: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034180579s +Aug 3 07:39:44.597: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.04271222s +Aug 3 07:39:44.597: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Aug 3 07:39:44.597: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] +STEP: checking for a matching strings +Aug 3 07:39:44.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2493 logs logs-generator logs-generator' +Aug 3 07:39:44.770: INFO: stderr: "" +Aug 3 07:39:44.770: INFO: stdout: "I0803 07:39:41.699066 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/fw6p 585\nI0803 07:39:41.899580 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/vtk5 345\nI0803 07:39:42.100165 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/8f9c 322\nI0803 07:39:42.299657 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/x5l 478\nI0803 07:39:42.500061 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/p25 432\nI0803 07:39:42.699419 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/wg8 483\nI0803 07:39:42.899857 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/2v8 344\nI0803 07:39:43.099160 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/7grx 413\nI0803 07:39:43.299078 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/j97 429\nI0803 07:39:43.499575 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/tvq 453\nI0803 07:39:43.700343 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/hszm 446\nI0803 07:39:43.899797 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/ngv 579\nI0803 07:39:44.099157 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/npn 349\nI0803 07:39:44.299717 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/p7h7 595\nI0803 07:39:44.499156 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/q2tt 481\nI0803 07:39:44.699581 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/gt4 477\n" +STEP: limiting log lines +Aug 3 07:39:44.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2493 logs logs-generator logs-generator --tail=1' +Aug 3 07:39:44.916: INFO: stderr: "" +Aug 3 07:39:44.916: INFO: stdout: "I0803 07:39:44.900051 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/lp4 291\n" +Aug 3 07:39:44.916: INFO: got output "I0803 07:39:44.900051 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/lp4 291\n" +STEP: limiting log bytes +Aug 3 07:39:44.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2493 logs logs-generator logs-generator --limit-bytes=1' +Aug 3 07:39:45.053: INFO: stderr: "" +Aug 3 07:39:45.053: INFO: stdout: "I" +Aug 3 07:39:45.053: INFO: got output "I" +STEP: exposing timestamps +Aug 3 07:39:45.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2493 logs logs-generator logs-generator --tail=1 --timestamps' +Aug 3 07:39:45.183: INFO: stderr: "" +Aug 3 07:39:45.183: INFO: stdout: "2022-08-03T07:39:45.101057193Z I0803 07:39:45.100777 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/f6t 474\n" +Aug 3 07:39:45.183: INFO: got output "2022-08-03T07:39:45.101057193Z I0803 07:39:45.100777 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/f6t 474\n" +STEP: restricting to a time range +Aug 3 07:39:47.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2493 logs logs-generator logs-generator --since=1s' +Aug 3 07:39:47.858: INFO: stderr: "" +Aug 3 07:39:47.858: INFO: stdout: "I0803 07:39:46.899432 1 logs_generator.go:76] 26 
POST /api/v1/namespaces/kube-system/pods/x9fh 457\nI0803 07:39:47.099996 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/kdj 396\nI0803 07:39:47.299188 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/kube-system/pods/rkh 215\nI0803 07:39:47.499972 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/nzd 517\nI0803 07:39:47.701994 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/zx4p 271\n" +Aug 3 07:39:47.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2493 logs logs-generator logs-generator --since=24h' +Aug 3 07:39:47.994: INFO: stderr: "" +Aug 3 07:39:47.994: INFO: stdout: "I0803 07:39:41.699066 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/fw6p 585\nI0803 07:39:41.899580 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/vtk5 345\nI0803 07:39:42.100165 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/8f9c 322\nI0803 07:39:42.299657 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/x5l 478\nI0803 07:39:42.500061 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/p25 432\nI0803 07:39:42.699419 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/wg8 483\nI0803 07:39:42.899857 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/2v8 344\nI0803 07:39:43.099160 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/7grx 413\nI0803 07:39:43.299078 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/j97 429\nI0803 07:39:43.499575 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/tvq 453\nI0803 07:39:43.700343 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/hszm 446\nI0803 07:39:43.899797 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/ngv 579\nI0803 07:39:44.099157 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/npn 349\nI0803 07:39:44.299717 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/p7h7 595\nI0803 07:39:44.499156 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/q2tt 481\nI0803 07:39:44.699581 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/gt4 477\nI0803 07:39:44.900051 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/lp4 291\nI0803 07:39:45.100777 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/f6t 474\nI0803 07:39:45.299346 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/cmp5 488\nI0803 07:39:45.499819 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/ml4q 567\nI0803 07:39:45.699190 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/wvml 354\nI0803 07:39:45.901083 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/rr9x 328\nI0803 07:39:46.099233 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/9mp 359\nI0803 07:39:46.299629 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/pqt 213\nI0803 07:39:46.499779 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/wv2p 230\nI0803 07:39:46.699760 1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/ss9 481\nI0803 07:39:46.899432 1 logs_generator.go:76] 26 POST /api/v1/namespaces/kube-system/pods/x9fh 457\nI0803 07:39:47.099996 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/kdj 396\nI0803 07:39:47.299188 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/kube-system/pods/rkh 215\nI0803 07:39:47.499972 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/nzd 517\nI0803 07:39:47.701994 1 
logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/zx4p 271\nI0803 07:39:47.899519 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/kube-system/pods/wcr 468\n" +[AfterEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 +Aug 3 07:39:47.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-2493 delete pod logs-generator' +Aug 3 07:39:51.171: INFO: stderr: "" +Aug 3 07:39:51.172: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:39:51.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2493" for this suite. + +• [SLOW TEST:12.936 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1406 + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":346,"completed":226,"skipped":4234,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:39:51.218: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename sysctl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with one valid and two invalid sysctls +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:39:51.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-3068" for this suite. 
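Editor's note: the sysctl rejection above happens at admission, so a pod that mixes one valid sysctl with a malformed name never reaches a node. A sketch of a request the API server refuses (names illustrative):
```
# The apply fails validation because "foo-" is not a legal sysctl name.
cat <<'EOF' | kubectl apply -f - || echo "rejected as expected"
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # a valid, namespaced-safe sysctl
      value: "0"
    - name: foo-                     # invalid sysctl name
      value: "bar"
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
EOF
```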
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":227,"skipped":4260,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:39:51.316: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting the proxy server +Aug 3 07:39:51.366: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-1213 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:39:51.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1213" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":346,"completed":228,"skipped":4287,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:39:51.491: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename security-context +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Aug 3 07:39:51.582: INFO: Waiting up to 5m0s for pod "security-context-14b7f4bc-0743-4766-b8c1-aaff08d4123f" in namespace "security-context-4088" to be "Succeeded or Failed" +Aug 3 07:39:51.606: INFO: Pod "security-context-14b7f4bc-0743-4766-b8c1-aaff08d4123f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.935136ms +Aug 3 07:39:53.621: INFO: Pod "security-context-14b7f4bc-0743-4766-b8c1-aaff08d4123f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038868353s +Aug 3 07:39:55.632: INFO: Pod "security-context-14b7f4bc-0743-4766-b8c1-aaff08d4123f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.05024066s +STEP: Saw pod success +Aug 3 07:39:55.632: INFO: Pod "security-context-14b7f4bc-0743-4766-b8c1-aaff08d4123f" satisfied condition "Succeeded or Failed" +Aug 3 07:39:55.640: INFO: Trying to get logs from node dce-10-6-213-50 pod security-context-14b7f4bc-0743-4766-b8c1-aaff08d4123f container test-container: +STEP: delete the pod +Aug 3 07:39:55.687: INFO: Waiting for pod security-context-14b7f4bc-0743-4766-b8c1-aaff08d4123f to disappear +Aug 3 07:39:55.692: INFO: Pod security-context-14b7f4bc-0743-4766-b8c1-aaff08d4123f no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:39:55.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-4088" for this suite. +•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":229,"skipped":4296,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:39:55.713: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local;check="$$(dig +tcp +noall 
+answer +search dns-test-service-2.dns-8598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 3 07:40:01.879: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:01.886: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:01.894: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:01.900: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:01.906: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:01.916: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:01.923: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:01.932: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:01.932: INFO: Lookups using dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local] + +Aug 3 07:40:06.940: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:06.947: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:06.953: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:06.959: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:06.964: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:06.971: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:06.980: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:06.986: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:06.986: INFO: Lookups using dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local] + +Aug 3 07:40:11.943: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:11.952: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:11.960: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:11.969: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the 
requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:11.976: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:11.990: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:12.000: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:12.009: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:12.009: INFO: Lookups using dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local] + +Aug 3 07:40:16.941: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:16.947: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:16.954: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:16.960: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:16.965: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:16.970: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:16.975: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:16.982: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:16.982: INFO: Lookups using dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local] + +Aug 3 07:40:21.941: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:21.948: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:21.955: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:21.962: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:21.969: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:21.976: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:21.982: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:21.990: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:21.990: INFO: Lookups using dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local] + +Aug 3 07:40:26.947: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:26.953: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:26.959: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:26.964: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:26.971: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:26.979: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:26.987: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:26.993: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:26.993: INFO: Lookups using dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local] + +Aug 3 07:40:31.940: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods 
dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:31.946: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:31.951: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:31.957: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:31.962: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:31.968: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:31.974: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:31.980: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local from pod dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848: the server could not find the requested resource (get pods dns-test-0201069b-223b-4b14-bd51-5c1b46f91848) +Aug 3 07:40:31.980: INFO: Lookups using dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8598.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8598.svc.cluster.local jessie_udp@dns-test-service-2.dns-8598.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8598.svc.cluster.local] + +Aug 3 07:40:37.002: INFO: DNS probes using dns-8598/dns-test-0201069b-223b-4b14-bd51-5c1b46f91848 succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:40:37.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-8598" for this suite. 
+ +• [SLOW TEST:41.418 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":346,"completed":230,"skipped":4304,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:40:37.132: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:40:37.842: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 07:40:39.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 40, 37, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 40, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 40, 37, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 40, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:40:41.879: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 40, 37, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 40, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 40, 37, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 40, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 
07:40:44.903: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:40:45.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7572" for this suite. +STEP: Destroying namespace "webhook-7572-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:8.214 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":231,"skipped":4311,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:40:45.346: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:40:51.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-4031" for this suite. 
+ +• [SLOW TEST:6.149 seconds] +[sig-node] Docker Containers +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":346,"completed":232,"skipped":4322,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:40:51.496: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-9088 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Aug 3 07:40:51.602: INFO: Found 0 stateful pods, waiting for 3 +Aug 3 07:41:01.621: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:41:01.621: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:41:01.621: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false +Aug 3 07:41:11.615: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:41:11.615: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:41:11.615: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:41:11.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-9088 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 3 07:41:11.929: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 3 07:41:11.929: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 3 07:41:11.929: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 +Aug 3 07:41:21.995: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Aug 3 
07:41:32.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-9088 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 3 07:41:32.300: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 3 07:41:32.300: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 3 07:41:32.300: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 3 07:41:42.342: INFO: Waiting for StatefulSet statefulset-9088/ss2 to complete update +Aug 3 07:41:42.342: INFO: Waiting for Pod statefulset-9088/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +Aug 3 07:41:42.342: INFO: Waiting for Pod statefulset-9088/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +Aug 3 07:41:52.369: INFO: Waiting for StatefulSet statefulset-9088/ss2 to complete update +STEP: Rolling back to a previous revision +Aug 3 07:42:02.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-9088 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 3 07:42:02.649: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 3 07:42:02.649: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 3 07:42:02.649: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 3 07:42:12.712: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Aug 3 07:42:22.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-9088 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 3 07:42:23.002: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 3 07:42:23.002: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 3 07:42:23.002: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 3 07:42:43.063: INFO: Waiting for StatefulSet statefulset-9088/ss2 to complete update +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 3 07:42:53.085: INFO: Deleting all statefulset in ns statefulset-9088 +Aug 3 07:42:53.094: INFO: Scaling statefulset ss2 to 0 +Aug 3 07:43:03.142: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 3 07:43:03.150: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:43:03.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-9088" for this suite. 
+ +• [SLOW TEST:131.723 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":233,"skipped":4343,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:43:03.221: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicationController +STEP: Ensuring resource quota status captures replication controller creation +STEP: Deleting a ReplicationController +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:43:14.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-9653" for this suite. + +• [SLOW TEST:11.198 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":346,"completed":234,"skipped":4380,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:43:14.420: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/discovery.k8s.io +STEP: getting /apis/discovery.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Aug 3 07:43:14.544: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Aug 3 07:43:14.553: INFO: starting watch +STEP: patching +STEP: updating +Aug 3 07:43:14.590: INFO: waiting for watch events with expected annotations +Aug 3 07:43:14.590: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:43:14.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-1058" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":346,"completed":235,"skipped":4398,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:43:14.669: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:43:21.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-8508" for this suite. 
+ +• [SLOW TEST:7.155 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":346,"completed":236,"skipped":4427,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:43:21.824: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: waiting for pod running +STEP: creating a file in subpath +Aug 3 07:43:27.961: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-4613 PodName:var-expansion-a3c826e0-73f1-4e19-8ff9-a922cbfe1bb4 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 07:43:27.961: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:43:27.961: INFO: ExecWithOptions: Clientset creation +Aug 3 07:43:27.962: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/var-expansion-4613/pods/var-expansion-a3c826e0-73f1-4e19-8ff9-a922cbfe1bb4/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING)) +STEP: test for file in mounted path +Aug 3 07:43:28.132: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-4613 PodName:var-expansion-a3c826e0-73f1-4e19-8ff9-a922cbfe1bb4 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 07:43:28.132: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:43:28.133: INFO: ExecWithOptions: Clientset creation +Aug 3 07:43:28.134: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/var-expansion-4613/pods/var-expansion-a3c826e0-73f1-4e19-8ff9-a922cbfe1bb4/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING)) +STEP: updating the annotation value +Aug 3 07:43:28.826: INFO: Successfully updated pod "var-expansion-a3c826e0-73f1-4e19-8ff9-a922cbfe1bb4" +STEP: waiting for annotated pod running +STEP: deleting the pod gracefully +Aug 3 07:43:28.837: INFO: Deleting pod "var-expansion-a3c826e0-73f1-4e19-8ff9-a922cbfe1bb4" in namespace "var-expansion-4613" +Aug 3 07:43:28.847: INFO: 
Wait up to 5m0s for pod "var-expansion-a3c826e0-73f1-4e19-8ff9-a922cbfe1bb4" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:44:02.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4613" for this suite. + +• [SLOW TEST:41.072 seconds] +[sig-node] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":346,"completed":237,"skipped":4459,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:44:02.897: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating api versions +Aug 3 07:44:02.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-9486 api-versions' +Aug 3 07:44:03.082: INFO: stderr: "" +Aug 3 07:44:03.082: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ndce.daocloud.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nuds.dce.daocloud.io/v1\nuds.dce.daocloud.io/v1alpha1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:44:03.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9486" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":346,"completed":238,"skipped":4483,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:44:03.103: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-92520c6a-9247-4724-89fa-a6b818e1cf2f +STEP: Creating a pod to test consume secrets +Aug 3 07:44:03.211: INFO: Waiting up to 5m0s for pod "pod-secrets-2443edb6-2c7d-4b88-a23b-e37dc33c1c00" in namespace "secrets-5540" to be "Succeeded or Failed" +Aug 3 07:44:03.217: INFO: Pod "pod-secrets-2443edb6-2c7d-4b88-a23b-e37dc33c1c00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392818ms +Aug 3 07:44:05.230: INFO: Pod "pod-secrets-2443edb6-2c7d-4b88-a23b-e37dc33c1c00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019024717s +Aug 3 07:44:07.247: INFO: Pod "pod-secrets-2443edb6-2c7d-4b88-a23b-e37dc33c1c00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036066235s +Aug 3 07:44:09.260: INFO: Pod "pod-secrets-2443edb6-2c7d-4b88-a23b-e37dc33c1c00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048922227s +STEP: Saw pod success +Aug 3 07:44:09.260: INFO: Pod "pod-secrets-2443edb6-2c7d-4b88-a23b-e37dc33c1c00" satisfied condition "Succeeded or Failed" +Aug 3 07:44:09.266: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-secrets-2443edb6-2c7d-4b88-a23b-e37dc33c1c00 container secret-volume-test: +STEP: delete the pod +Aug 3 07:44:09.330: INFO: Waiting for pod pod-secrets-2443edb6-2c7d-4b88-a23b-e37dc33c1c00 to disappear +Aug 3 07:44:09.340: INFO: Pod pod-secrets-2443edb6-2c7d-4b88-a23b-e37dc33c1c00 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:44:09.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5540" for this suite. 
+ +• [SLOW TEST:6.261 seconds] +[sig-storage] Secrets +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":239,"skipped":4507,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:44:09.364: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:44:09.454: INFO: Creating deployment "test-recreate-deployment" +Aug 3 07:44:09.475: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Aug 3 07:44:09.494: INFO: deployment "test-recreate-deployment" doesn't have the required revision set +Aug 3 07:44:11.519: INFO: Waiting deployment "test-recreate-deployment" to complete +Aug 3 07:44:11.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 44, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 44, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 44, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 44, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-7d659f7dc9\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:44:13.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 44, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 44, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 44, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 44, 9, 0, time.Local), 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-7d659f7dc9\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:44:15.541: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Aug 3 07:44:15.561: INFO: Updating deployment test-recreate-deployment +Aug 3 07:44:15.561: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 3 07:44:15.849: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-621 07671bd7-f7a2-4042-8908-0c522c2eb321 633697 2 2022-08-03 07:44:09 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00d2a9658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-08-03 07:44:15 +0000 UTC,LastTransitionTime:2022-08-03 07:44:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5b99bd5487" is progressing.,LastUpdateTime:2022-08-03 07:44:15 +0000 UTC,LastTransitionTime:2022-08-03 07:44:09 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Aug 3 07:44:15.857: INFO: New ReplicaSet "test-recreate-deployment-5b99bd5487" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-5b99bd5487 deployment-621 a3d9b024-f5ba-4faa-b2fd-0c2f9b3d3c42 633696 1 2022-08-03 07:44:15 +0000 UTC map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 07671bd7-f7a2-4042-8908-0c522c2eb321 0xc004d3f0c7 0xc004d3f0c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5b99bd5487,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] 
[] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d3f128 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 3 07:44:15.857: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Aug 3 07:44:15.857: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-7d659f7dc9 deployment-621 a3941cc8-fea7-46a5-8f08-59b127ba8f98 633685 2 2022-08-03 07:44:09 +0000 UTC map[name:sample-pod-3 pod-template-hash:7d659f7dc9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 07671bd7-f7a2-4042-8908-0c522c2eb321 0xc004d3f197 0xc004d3f198}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 7d659f7dc9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:7d659f7dc9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d3f208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 3 07:44:15.871: INFO: Pod "test-recreate-deployment-5b99bd5487-hcmr9" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-5b99bd5487-hcmr9 test-recreate-deployment-5b99bd5487- deployment-621 8431e91f-73f8-4a74-a286-e85f524ca4d7 633698 0 2022-08-03 07:44:15 +0000 UTC map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5b99bd5487 a3d9b024-f5ba-4faa-b2fd-0c2f9b3d3c42 0xc004d3f697 0xc004d3f698}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jg997,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jg997,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:,StartTime:2022-08-03 07:44:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:44:15.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-621" for this suite. + +• [SLOW TEST:6.535 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":240,"skipped":4527,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:44:15.900: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:44:16.747: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 07:44:18.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 44, 16, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 44, 16, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 44, 16, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 44, 16, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:44:20.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 44, 16, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 44, 16, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 44, 16, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 44, 16, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 07:44:23.840: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/admissionregistration.k8s.io discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:44:23.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7985" for this suite. +STEP: Destroying namespace "webhook-7985-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:8.058 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":346,"completed":241,"skipped":4567,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:44:23.960: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:44:24.025: INFO: Creating deployment "webserver-deployment" +Aug 3 07:44:24.032: INFO: Waiting for observed generation 1 +Aug 3 07:44:26.056: INFO: Waiting for all required pods to come up +Aug 3 07:44:26.068: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running +Aug 3 07:44:34.100: INFO: Waiting for deployment "webserver-deployment" to complete +Aug 3 07:44:34.112: INFO: Updating deployment "webserver-deployment" with a non-existent image +Aug 3 07:44:34.140: INFO: Updating deployment webserver-deployment +Aug 3 07:44:34.140: INFO: Waiting for observed generation 2 +Aug 3 07:44:36.166: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Aug 3 07:44:36.178: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Aug 3 07:44:36.185: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Aug 3 07:44:36.232: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Aug 3 07:44:36.232: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Aug 3 07:44:36.239: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Aug 3 07:44:36.254: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Aug 3 07:44:36.254: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Aug 3 07:44:36.278: INFO: Updating deployment webserver-deployment +Aug 3 07:44:36.278: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Aug 3 
07:44:36.318: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Aug 3 07:44:38.351: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 3 07:44:38.364: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-1987 2d27c4dc-aead-4364-b4c3-795ec74e3953 634181 3 2022-08-03 07:44:24 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003c3a5d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-08-03 07:44:36 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-566f96c878" is progressing.,LastUpdateTime:2022-08-03 07:44:36 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Aug 3 07:44:38.373: INFO: New ReplicaSet "webserver-deployment-566f96c878" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-566f96c878 deployment-1987 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 634177 3 2022-08-03 07:44:34 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 2d27c4dc-aead-4364-b4c3-795ec74e3953 0xc0057f5ed7 0xc0057f5ed8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 566f96c878,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false 
false false}] [] Always 0xc0057f5f58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 3 07:44:38.373: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Aug 3 07:44:38.373: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-5d9fdcc779 deployment-1987 41f600b6-7ec7-48bc-978b-f74681f6d5d9 634162 3 2022-08-03 07:44:24 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2d27c4dc-aead-4364-b4c3-795ec74e3953 0xc0057f5fb7 0xc0057f5fb8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005058018 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Aug 3 07:44:38.411: INFO: Pod "webserver-deployment-566f96c878-2w5jl" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-2w5jl webserver-deployment-566f96c878- deployment-1987 46c1980b-a170-45fc-8e7e-148228ac6da7 634156 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3a9c7 0xc003c3a9c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x57xj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x57xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:,StartTime:2022-08-03 07:44:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.411: INFO: Pod "webserver-deployment-566f96c878-42j4q" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-42j4q webserver-deployment-566f96c878- deployment-1987 f682b7a3-dec5-44f2-ba7c-5e4acef0278f 634159 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3ab80 0xc003c3ab81}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2dnjs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dnjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOp
tions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.411: INFO: Pod "webserver-deployment-566f96c878-46gwj" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-46gwj webserver-deployment-566f96c878- deployment-1987 12d17a47-5a70-46d3-8e8d-170a665e4e0d 634169 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3ace0 0xc003c3ace1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7rtwg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rtwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTim
e:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.414: INFO: Pod "webserver-deployment-566f96c878-5qgtq" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-5qgtq webserver-deployment-566f96c878- deployment-1987 e81fae27-ea56-478f-94e4-28a2efe25406 634158 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3ae40 0xc003c3ae41}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5tfhm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tfhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]C
ontainer{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.414: INFO: Pod "webserver-deployment-566f96c878-bcrdl" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-bcrdl webserver-deployment-566f96c878- deployment-1987 904fe85b-b5df-4988-8a55-40d68255e984 634165 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3afa0 0xc003c3afa1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n7lsv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n7lsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.415: INFO: Pod "webserver-deployment-566f96c878-f9mfs" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-f9mfs webserver-deployment-566f96c878- deployment-1987 fb722415-51fa-4d7d-83a0-7d4f8311b9e6 634235 0 2022-08-03 07:44:34 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3b100 0xc003c3b101}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2cwtx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2cwtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:,StartTime:2022-08-03 07:44:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.415: INFO: Pod "webserver-deployment-566f96c878-l8gqv" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-l8gqv webserver-deployment-566f96c878- deployment-1987 c58374d9-a7bb-4fa0-9db2-75325cfbd6bc 634132 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3b2b0 0xc003c3b2b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4c4wf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4c4wf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOp
tions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.415: INFO: Pod "webserver-deployment-566f96c878-lg4s4" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-lg4s4 webserver-deployment-566f96c878- deployment-1987 a796e867-a761-4c06-9bea-ba5a72df5e03 634221 0 2022-08-03 07:44:34 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3b410 0xc003c3b411}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lhsmz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhsmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:,StartTime:2022-08-03 07:44:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.415: INFO: Pod "webserver-deployment-566f96c878-pt466" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-pt466 webserver-deployment-566f96c878- deployment-1987 e83ad6e4-6bef-453c-9cfb-393d2ba436b5 634234 0 2022-08-03 07:44:34 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3b5c0 0xc003c3b5c1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gj8m2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gj8m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:,StartTime:2022-08-03 07:44:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.418: INFO: Pod "webserver-deployment-566f96c878-qhgrl" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-qhgrl webserver-deployment-566f96c878- deployment-1987 a218d548-c2d2-44a8-9616-52255fead970 634147 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3b770 0xc003c3b771}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kpqtr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kpqtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOp
tions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:,StartTime:2022-08-03 07:44:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.418: INFO: Pod "webserver-deployment-566f96c878-w4rkh" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-w4rkh webserver-deployment-566f96c878- deployment-1987 75487fd8-5fb2-4afc-89ea-4a607e5b2073 634232 0 2022-08-03 07:44:34 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 
a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3b960 0xc003c3b961}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vmmwg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vmmwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]
PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:,StartTime:2022-08-03 07:44:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.418: INFO: Pod "webserver-deployment-566f96c878-wtjdc" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-wtjdc webserver-deployment-566f96c878- deployment-1987 42e033ee-25ad-40cf-9f50-95ce27682eb9 634219 0 2022-08-03 07:44:34 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3bb10 0xc003c3bb11}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wt29v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wt29v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:,StartTime:2022-08-03 07:44:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.421: INFO: Pod "webserver-deployment-566f96c878-xnv4d" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-xnv4d webserver-deployment-566f96c878- deployment-1987 4e365e57-7f2b-4ff4-8ce3-82273ac7ea13 634166 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 a2fdc23f-e875-49fe-8a06-5f5a7aa40302 0xc003c3bce0 0xc003c3bce1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zdszm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zdszm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOp
tions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.425: INFO: Pod "webserver-deployment-5d9fdcc779-2rtnv" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-2rtnv webserver-deployment-5d9fdcc779- deployment-1987 f679953f-87e0-4f3a-bcdb-e1e3bbbb58fd 634195 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc003c3be40 0xc003c3be41}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hqv8m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqv8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:,StartTime:2022-08-03 07:44:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.431: INFO: Pod "webserver-deployment-5d9fdcc779-2vc2b" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-2vc2b webserver-deployment-5d9fdcc779- deployment-1987 027cc8bf-d8a0-424d-9eff-d608cdf13c8c 634176 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc003c3bfd7 0xc003c3bfd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tcmjp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tcmjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfN
otPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:,StartTime:2022-08-03 07:44:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.431: INFO: Pod "webserver-deployment-5d9fdcc779-45gg5" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-45gg5 webserver-deployment-5d9fdcc779- deployment-1987 3ba240e7-dcf3-4569-9ca7-d8ffa0086d24 634164 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8a177 0xc004e8a178}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gpxq4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gpxq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedu
led,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.432: INFO: Pod "webserver-deployment-5d9fdcc779-4wx94" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-4wx94 webserver-deployment-5d9fdcc779- deployment-1987 82a5ec00-934b-440b-aaa7-9ad514819497 634019 0 2022-08-03 07:44:24 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8a2d0 0xc004e8a2d1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b8jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil
,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:172.29.175.61,StartTime:2022-08-03 07:44:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:44:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://fe2508d98324df2da4b26089984f16f68e9cd3b1e8b78d2a94df24a22d61ff57,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.175.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.432: INFO: Pod "webserver-deployment-5d9fdcc779-5bqbq" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-5bqbq webserver-deployment-5d9fdcc779- deployment-1987 6464bee4-d4d1-4901-b1e0-059223b70d86 634157 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8a487 0xc004e8a488}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8svqq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8svqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:,StartTime:2022-08-03 07:44:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.432: INFO: Pod "webserver-deployment-5d9fdcc779-6k6tx" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-6k6tx webserver-deployment-5d9fdcc779- deployment-1987 708bcf9b-5bbf-4959-ba76-b27610f2246a 634144 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8a627 0xc004e8a628}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5vcpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5vcpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfN
otPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.433: INFO: Pod "webserver-deployment-5d9fdcc779-6k8v9" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-6k8v9 webserver-deployment-5d9fdcc779- deployment-1987 f3e09d9f-e615-4cc3-9d22-ce42c77aa436 634140 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8a780 0xc004e8a781}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jkj9s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkj9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedu
led,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.434: INFO: Pod "webserver-deployment-5d9fdcc779-blpjq" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-blpjq webserver-deployment-5d9fdcc779- deployment-1987 05fc9cec-76e0-41f4-a967-6ac631a09c78 633992 0 2022-08-03 07:44:24 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8a8d0 0xc004e8a8d1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mzclb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mzclb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil
,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:172.29.31.67,StartTime:2022-08-03 07:44:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:44:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://2f73226aab4d8bd1851b38b150492359c5d5c0d39c0f14e90802a1f429984934,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.31.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.434: INFO: Pod "webserver-deployment-5d9fdcc779-cnzfn" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-cnzfn webserver-deployment-5d9fdcc779- deployment-1987 29ffeab6-4d9e-4fff-bcbd-9d314f96174c 634153 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8aa87 0xc004e8aa88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-89rlr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89rlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedu
led,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.444: INFO: Pod "webserver-deployment-5d9fdcc779-fxqpw" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-fxqpw webserver-deployment-5d9fdcc779- deployment-1987 0f07fffb-9058-413f-b611-b25600bfa0c4 634163 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8abe0 0xc004e8abe1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bfqvd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfqvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinit
y:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.444: INFO: Pod "webserver-deployment-5d9fdcc779-hbtkc" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-hbtkc webserver-deployment-5d9fdcc779- deployment-1987 617c9b95-6bcc-421c-87bd-06f629f4a41b 634245 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8ad30 0xc004e8ad31}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g2flf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g2flf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:n
il,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:,StartTime:2022-08-03 07:44:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.445: INFO: Pod "webserver-deployment-5d9fdcc779-hwsrp" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-hwsrp webserver-deployment-5d9fdcc779- deployment-1987 24c17f1b-a4b4-43b1-8a9d-c3b7e56ac972 634155 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8aec7 0xc004e8aec8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9ctq8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9ctq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:,StartTime:2022-08-03 07:44:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.448: INFO: Pod "webserver-deployment-5d9fdcc779-j8zk8" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-j8zk8 webserver-deployment-5d9fdcc779- deployment-1987 ae4d6c69-0b68-41e4-b9af-ff0d3c9ac938 633986 0 2022-08-03 07:44:24 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8b067 0xc004e8b068}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-drwtv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drwtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:172.29.31.96,StartTime:2022-08-03 07:44:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:44:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://76ffd46396e9bd8de1b66a2783c59e7afab57a6b72f0f33d6ed19aa0f27f73e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.31.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.448: INFO: Pod "webserver-deployment-5d9fdcc779-ksjfv" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-ksjfv webserver-deployment-5d9fdcc779- deployment-1987 5c775b2d-3f13-4798-8d5e-06d321f9f5c9 634010 0 2022-08-03 07:44:24 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8b227 0xc004e8b228}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zgdsk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zgdsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:172.29.175.36,StartTime:2022-08-03 07:44:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:44:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://02c0bac25b3588560a1ae53ec495ba3f5d5a66f70354a765dd70718b66f653c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.175.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.448: INFO: Pod "webserver-deployment-5d9fdcc779-lb8sw" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-lb8sw webserver-deployment-5d9fdcc779- deployment-1987 f4e4a72c-4f40-47fc-b63f-7b35b71bef4a 634171 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8b3e7 0xc004e8b3e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8hhfh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8hhfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:,StartTime:2022-08-03 07:44:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.449: INFO: Pod "webserver-deployment-5d9fdcc779-qzpsp" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-qzpsp webserver-deployment-5d9fdcc779- deployment-1987 bc196d44-d5fd-49f0-82b9-e645c42f6d95 633996 0 2022-08-03 07:44:24 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8b587 0xc004e8b588}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5pqlj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5pqlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:172.29.31.111,StartTime:2022-08-03 07:44:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:44:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://51c5b830a0b2ded412786ba41715c49d50bb445ca30196a0b0ea7d456701308d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.31.111,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.449: INFO: Pod "webserver-deployment-5d9fdcc779-sd9hr" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-sd9hr webserver-deployment-5d9fdcc779- deployment-1987 8f8f3fef-8678-407f-8ae3-de43c8ee29a8 634030 0 2022-08-03 07:44:24 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8b757 0xc004e8b758}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x4xxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4xxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:172.29.175.30,StartTime:2022-08-03 07:44:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:44:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://876aee273a8349751e619f33e7f8fd69a085e1a672ec4a89510a0e00c5a5b167,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.175.30,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.449: INFO: Pod "webserver-deployment-5d9fdcc779-sqxn5" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-sqxn5 webserver-deployment-5d9fdcc779- deployment-1987 9616c143-6b83-4cfa-9ff4-2c6648331230 634025 0 2022-08-03 07:44:24 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8b917 0xc004e8b918}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wcxhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcxhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:172.29.31.112,StartTime:2022-08-03 07:44:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:44:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://f376c3c40a8e3fe67d10b383d114e45271b5d7e91d097af4e4b6822fb7b1c161,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.31.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.452: INFO: Pod "webserver-deployment-5d9fdcc779-vnfwx" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-vnfwx webserver-deployment-5d9fdcc779- deployment-1987 18f16dda-dbcd-4c85-b8d2-1a90cf3df502 634022 0 2022-08-03 07:44:24 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8bad7 0xc004e8bad8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jrfkx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jrfkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:172.29.31.79,StartTime:2022-08-03 07:44:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:44:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://21c9d1bcf6fb605518e34717f0da7e3bb57263e79a947750c526eabb9967665f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.31.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:44:38.452: INFO: Pod "webserver-deployment-5d9fdcc779-vznxc" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-vznxc webserver-deployment-5d9fdcc779- deployment-1987 26bce457-962a-4ca2-b7ec-e6fc340a025d 634161 0 2022-08-03 07:44:36 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 41f600b6-7ec7-48bc-978b-f74681f6d5d9 0xc004e8bca7 0xc004e8bca8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lp4lc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lp4lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:44:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:,StartTime:2022-08-03 07:44:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:44:38.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1987" for this suite. + +• [SLOW TEST:14.534 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":242,"skipped":4606,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:44:38.495: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-projected-all-test-volume-f63bbe28-c218-4f1a-8764-960010f2a947 +STEP: Creating secret with name secret-projected-all-test-volume-248d7147-e3e7-42aa-bc04-5689db95cae9 +STEP: Creating a pod to test Check all projections for projected volume plugin +Aug 3 07:44:38.759: INFO: Waiting up to 5m0s for pod "projected-volume-fe73198e-c721-4dca-88a7-c17074a230dc" in namespace "projected-229" to be "Succeeded or Failed" +Aug 3 07:44:38.776: INFO: 
Pod "projected-volume-fe73198e-c721-4dca-88a7-c17074a230dc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.080189ms +Aug 3 07:44:40.801: INFO: Pod "projected-volume-fe73198e-c721-4dca-88a7-c17074a230dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041913143s +Aug 3 07:44:42.827: INFO: Pod "projected-volume-fe73198e-c721-4dca-88a7-c17074a230dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068048207s +Aug 3 07:44:44.838: INFO: Pod "projected-volume-fe73198e-c721-4dca-88a7-c17074a230dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079030371s +Aug 3 07:44:46.852: INFO: Pod "projected-volume-fe73198e-c721-4dca-88a7-c17074a230dc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093207222s +Aug 3 07:44:48.878: INFO: Pod "projected-volume-fe73198e-c721-4dca-88a7-c17074a230dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118924072s +STEP: Saw pod success +Aug 3 07:44:48.878: INFO: Pod "projected-volume-fe73198e-c721-4dca-88a7-c17074a230dc" satisfied condition "Succeeded or Failed" +Aug 3 07:44:48.894: INFO: Trying to get logs from node dce-10-6-213-50 pod projected-volume-fe73198e-c721-4dca-88a7-c17074a230dc container projected-all-volume-test: +STEP: delete the pod +Aug 3 07:44:49.058: INFO: Waiting for pod projected-volume-fe73198e-c721-4dca-88a7-c17074a230dc to disappear +Aug 3 07:44:49.078: INFO: Pod projected-volume-fe73198e-c721-4dca-88a7-c17074a230dc no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:44:49.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-229" for this suite. + +• [SLOW TEST:10.662 seconds] +[sig-storage] Projected combined +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":346,"completed":243,"skipped":4622,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:44:49.157: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Starting the proxy +Aug 3 07:44:49.370: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-156 proxy 
--unix-socket=/tmp/kubectl-proxy-unix1038386595/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:44:49.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-156" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":346,"completed":244,"skipped":4635,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:44:49.633: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ServiceAccount +STEP: watching for the ServiceAccount to be added +STEP: patching the ServiceAccount +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) +STEP: deleting the ServiceAccount +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:44:49.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-2850" for this suite. 
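+
+The ServiceAccount lifecycle exercised above (create, patch, list by label selector, delete) maps onto plain kubectl calls; the name and label below are illustrative placeholders, not values from this run:
+```
+kubectl create serviceaccount demo-sa
+kubectl patch serviceaccount demo-sa -p '{"metadata":{"labels":{"purpose":"demo"}}}'
+kubectl get serviceaccounts -l purpose=demo     # find it via the label selector
+kubectl delete serviceaccount demo-sa
+```
+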
+•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":346,"completed":245,"skipped":4661,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:44:49.859: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:44:49.980: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Aug 3 07:44:54.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-5404 --namespace=crd-publish-openapi-5404 create -f -' +Aug 3 07:44:55.920: INFO: stderr: "" +Aug 3 07:44:55.920: INFO: stdout: "e2e-test-crd-publish-openapi-3756-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Aug 3 07:44:55.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-5404 --namespace=crd-publish-openapi-5404 delete e2e-test-crd-publish-openapi-3756-crds test-cr' +Aug 3 07:44:56.045: INFO: stderr: "" +Aug 3 07:44:56.045: INFO: stdout: "e2e-test-crd-publish-openapi-3756-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Aug 3 07:44:56.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-5404 --namespace=crd-publish-openapi-5404 apply -f -' +Aug 3 07:44:57.038: INFO: stderr: "" +Aug 3 07:44:57.038: INFO: stdout: "e2e-test-crd-publish-openapi-3756-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Aug 3 07:44:57.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-5404 --namespace=crd-publish-openapi-5404 delete e2e-test-crd-publish-openapi-3756-crds test-cr' +Aug 3 07:44:57.153: INFO: stderr: "" +Aug 3 07:44:57.154: INFO: stdout: "e2e-test-crd-publish-openapi-3756-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Aug 3 07:44:57.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-5404 explain e2e-test-crd-publish-openapi-3756-crds' +Aug 3 07:44:57.425: INFO: stderr: "" +Aug 3 07:44:57.426: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-3756-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:45:01.462: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-5404" for this suite. + +• [SLOW TEST:11.638 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":346,"completed":246,"skipped":4679,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] CronJob + should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:45:01.497: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Aug 3 07:45:01.590: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Aug 3 07:45:01.599: INFO: starting watch +STEP: patching +STEP: updating +Aug 3 07:45:01.633: INFO: waiting for watch events with expected annotations +Aug 3 07:45:01.633: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:45:01.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-666" for this suite. 
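+
+The CronJob API operations above (create, get, list, patch, update, delete) can be approximated with plain kubectl; the manifest is an illustrative sketch using batch/v1, the CronJob API version served by Kubernetes 1.23:
+```
+kubectl apply -f - <<EOF
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+  name: demo-cronjob
+spec:
+  schedule: "*/1 * * * *"
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          restartPolicy: OnFailure
+          containers:
+          - name: demo
+            image: busybox
+            command: ["sh", "-c", "date"]
+EOF
+kubectl get cronjob demo-cronjob                 # get / list
+kubectl patch cronjob demo-cronjob -p '{"metadata":{"annotations":{"patched":"true"}}}'
+kubectl delete cronjob demo-cronjob              # the suite also deletes via a collection delete
+```
+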
+•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":346,"completed":247,"skipped":4687,"failed":0} + +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:45:01.770: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:45:02.433: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 07:45:04.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 45, 2, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 45, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 45, 2, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 45, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:45:06.478: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 45, 2, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 45, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 45, 2, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 45, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 07:45:09.509: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API +STEP: create a pod that 
should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:45:09.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8431" for this suite. +STEP: Destroying namespace "webhook-8431-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:7.950 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":346,"completed":248,"skipped":4687,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:45:09.721: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replication controller my-hostname-basic-33e142dc-0ca9-4fcc-b5db-d91b65125f1f +Aug 3 07:45:09.816: INFO: Pod name my-hostname-basic-33e142dc-0ca9-4fcc-b5db-d91b65125f1f: Found 0 pods out of 1 +Aug 3 07:45:14.825: INFO: Pod name my-hostname-basic-33e142dc-0ca9-4fcc-b5db-d91b65125f1f: Found 1 pods out of 1 +Aug 3 07:45:14.825: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-33e142dc-0ca9-4fcc-b5db-d91b65125f1f" are running +Aug 3 07:45:14.833: INFO: Pod "my-hostname-basic-33e142dc-0ca9-4fcc-b5db-d91b65125f1f-ksxbh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-03 07:45:09 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-03 07:45:13 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-03 07:45:13 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-03 07:45:09 +0000 UTC Reason: Message:}]) +Aug 3 07:45:14.833: INFO: Trying to dial the pod +Aug 3 07:45:19.857: INFO: Controller 
my-hostname-basic-33e142dc-0ca9-4fcc-b5db-d91b65125f1f: Got expected result from replica 1 [my-hostname-basic-33e142dc-0ca9-4fcc-b5db-d91b65125f1f-ksxbh]: "my-hostname-basic-33e142dc-0ca9-4fcc-b5db-d91b65125f1f-ksxbh", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:45:19.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-6194" for this suite. + +• [SLOW TEST:10.156 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":249,"skipped":4700,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:45:19.880: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Aug 3 07:45:19.963: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation was observed +STEP: deleting the pod gracefully +STEP: verifying pod deletion was observed +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:45:28.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4040" for this suite. 
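+
+Outside the harness, the submit/observe/remove cycle above reduces to a few kubectl calls (pod name and image are placeholders; the suite additionally watches for the creation and deletion events):
+```
+kubectl run demo-pod --image=busybox --command -- sleep 3600
+kubectl get pod demo-pod -o wide                 # verify the pod exists in the API
+kubectl delete pod demo-pod --grace-period=30    # graceful deletion, as in the test
+kubectl get pod demo-pod                         # eventually returns NotFound
+```
+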
+ +• [SLOW TEST:8.251 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":250,"skipped":4786,"failed":0} +SSSSSS +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:45:28.129: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename hostport +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled +Aug 3 07:45:28.209: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:45:30.219: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:45:32.221: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:45:34.224: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.6.213.40 on the node which pod1 resides and expect scheduled +Aug 3 07:45:34.238: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:45:36.251: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:45:38.253: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.6.213.40 but use UDP protocol on the node which pod2 resides +Aug 3 07:45:38.278: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:45:40.297: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:45:42.291: INFO: The status of Pod pod3 is Running (Ready = true) +Aug 3 07:45:42.313: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:45:44.332: INFO: The status of Pod e2e-host-exec is Running (Ready = true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 +Aug 3 07:45:44.341: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.6.213.40 http://127.0.0.1:54323/hostname] Namespace:hostport-620 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 07:45:44.341: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:45:44.343: INFO: ExecWithOptions: Clientset creation +Aug 3 07:45:44.343: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/hostport-620/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+10.6.213.40+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.6.213.40, port: 54323 +Aug 3 07:45:44.563: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.6.213.40:54323/hostname] Namespace:hostport-620 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 07:45:44.563: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:45:44.564: INFO: ExecWithOptions: Clientset creation +Aug 3 07:45:44.564: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/hostport-620/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F10.6.213.40%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.6.213.40, port: 54323 UDP +Aug 3 07:45:44.785: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.6.213.40 54323] Namespace:hostport-620 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 07:45:44.785: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:45:44.787: INFO: ExecWithOptions: Clientset creation +Aug 3 07:45:44.787: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/hostport-620/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+10.6.213.40+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) +[AfterEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:45:49.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostport-620" for this suite. 
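+
+The scenario above binds three pods to the same hostPort (54323), distinguished by hostIP and protocol. A sketch of one such pod, reusing the port and address values recorded in the log (agnhost's netexec serves /hostname, which the curl checks above hit):
+```
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hostport-demo
+spec:
+  containers:
+  - name: agnhost
+    image: k8s.gcr.io/e2e-test-images/agnhost:2.33
+    args: ["netexec", "--http-port=8080"]
+    ports:
+    - containerPort: 8080
+      hostPort: 54323
+      hostIP: 127.0.0.1   # a second pod may reuse hostPort 54323 with a different hostIP (e.g. 10.6.213.40)
+      protocol: TCP       # or the same hostIP/port with protocol: UDP, as pod3 does
+EOF
+# from a host-network helper pod, as in the log:
+curl -g --connect-timeout 5 http://127.0.0.1:54323/hostname
+```
+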
+ +• [SLOW TEST:21.862 seconds] +[sig-network] HostPort +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":346,"completed":251,"skipped":4792,"failed":0} +SSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:45:49.992: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Aug 3 07:45:50.084: INFO: The status of Pod labelsupdatea36dfc15-5eea-4c55-9581-f06b4af30097 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:45:52.097: INFO: The status of Pod labelsupdatea36dfc15-5eea-4c55-9581-f06b4af30097 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:45:54.101: INFO: The status of Pod labelsupdatea36dfc15-5eea-4c55-9581-f06b4af30097 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:45:56.098: INFO: The status of Pod labelsupdatea36dfc15-5eea-4c55-9581-f06b4af30097 is Running (Ready = true) +Aug 3 07:45:56.675: INFO: Successfully updated pod "labelsupdatea36dfc15-5eea-4c55-9581-f06b4af30097" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:45:58.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9183" for this suite. 
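+
+What "update labels on modification" exercises: a downwardAPI volume re-projects metadata.labels after the pod's labels change. A minimal sketch (pod name, label, and image are placeholders):
+```
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: labels-demo
+  labels:
+    key1: value1
+spec:
+  containers:
+  - name: client
+    image: busybox
+    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: labels
+        fieldRef:
+          fieldPath: metadata.labels
+EOF
+kubectl label pod labels-demo key1=value2 --overwrite   # the projected file updates on the kubelet's next sync
+```
+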
+ +• [SLOW TEST:8.760 seconds] +[sig-storage] Downward API volume +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":252,"skipped":4795,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:45:58.753: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Aug 3 07:45:58.817: INFO: Waiting up to 5m0s for pod "pod-3611c56a-34e3-4ae3-a5fb-58955538cadd" in namespace "emptydir-5739" to be "Succeeded or Failed" +Aug 3 07:45:58.831: INFO: Pod "pod-3611c56a-34e3-4ae3-a5fb-58955538cadd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.262103ms +Aug 3 07:46:00.841: INFO: Pod "pod-3611c56a-34e3-4ae3-a5fb-58955538cadd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024058907s +Aug 3 07:46:02.852: INFO: Pod "pod-3611c56a-34e3-4ae3-a5fb-58955538cadd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034815433s +Aug 3 07:46:04.871: INFO: Pod "pod-3611c56a-34e3-4ae3-a5fb-58955538cadd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054013019s +STEP: Saw pod success +Aug 3 07:46:04.871: INFO: Pod "pod-3611c56a-34e3-4ae3-a5fb-58955538cadd" satisfied condition "Succeeded or Failed" +Aug 3 07:46:04.883: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-3611c56a-34e3-4ae3-a5fb-58955538cadd container test-container: +STEP: delete the pod +Aug 3 07:46:04.999: INFO: Waiting for pod pod-3611c56a-34e3-4ae3-a5fb-58955538cadd to disappear +Aug 3 07:46:05.019: INFO: Pod pod-3611c56a-34e3-4ae3-a5fb-58955538cadd no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:46:05.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5739" for this suite. 
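+
+The emptyDir case above writes a 0666-mode file as a non-root user on the default (node-disk) medium. An illustrative manifest; the UID and file name are assumptions, not taken from the log:
+```
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-demo
+spec:
+  securityContext:
+    runAsUser: 1000      # non-root; the kubelet creates emptyDir world-writable
+  restartPolicy: Never
+  containers:
+  - name: test
+    image: busybox
+    command: ["sh", "-c", "touch /cache/f && chmod 0666 /cache/f && ls -l /cache/f"]
+    volumeMounts:
+    - name: cache
+      mountPath: /cache
+  volumes:
+  - name: cache
+    emptyDir: {}         # no medium set, i.e. the default node-disk medium from the test name
+EOF
+```
+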
+ +• [SLOW TEST:6.298 seconds] +[sig-storage] EmptyDir volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":253,"skipped":4834,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:46:05.054: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test service account token: +Aug 3 07:46:05.165: INFO: Waiting up to 5m0s for pod "test-pod-1d8a08c3-87cb-4b25-877e-27529135fb6f" in namespace "svcaccounts-9315" to be "Succeeded or Failed" +Aug 3 07:46:05.173: INFO: Pod "test-pod-1d8a08c3-87cb-4b25-877e-27529135fb6f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.878977ms +Aug 3 07:46:07.188: INFO: Pod "test-pod-1d8a08c3-87cb-4b25-877e-27529135fb6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023110887s +Aug 3 07:46:09.202: INFO: Pod "test-pod-1d8a08c3-87cb-4b25-877e-27529135fb6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036865594s +Aug 3 07:46:11.216: INFO: Pod "test-pod-1d8a08c3-87cb-4b25-877e-27529135fb6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051132758s +STEP: Saw pod success +Aug 3 07:46:11.216: INFO: Pod "test-pod-1d8a08c3-87cb-4b25-877e-27529135fb6f" satisfied condition "Succeeded or Failed" +Aug 3 07:46:11.221: INFO: Trying to get logs from node dce-10-6-213-50 pod test-pod-1d8a08c3-87cb-4b25-877e-27529135fb6f container agnhost-container: +STEP: delete the pod +Aug 3 07:46:11.264: INFO: Waiting for pod test-pod-1d8a08c3-87cb-4b25-877e-27529135fb6f to disappear +Aug 3 07:46:11.272: INFO: Pod test-pod-1d8a08c3-87cb-4b25-877e-27529135fb6f no longer exists +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:46:11.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-9315" for this suite. 
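+
+The "mount projected service account token" check corresponds to a projected volume with a serviceAccountToken source; a minimal sketch (mount path and expiry are illustrative):
+```
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sa-token-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client
+    image: busybox
+    command: ["sh", "-c", "test -s /var/run/secrets/tokens/sa-token && echo token mounted"]
+    volumeMounts:
+    - name: sa-token
+      mountPath: /var/run/secrets/tokens
+  volumes:
+  - name: sa-token
+    projected:
+      sources:
+      - serviceAccountToken:
+          path: sa-token
+          expirationSeconds: 3600   # kubelet rotates the token before it expires
+EOF
+```
+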
+ +• [SLOW TEST:6.240 seconds] +[sig-auth] ServiceAccounts +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 + should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":346,"completed":254,"skipped":4854,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:46:11.295: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-bf5c0695-04d4-4172-929b-6fd3434aa668 +STEP: Creating a pod to test consume secrets +Aug 3 07:46:11.381: INFO: Waiting up to 5m0s for pod "pod-secrets-9515d964-b5f3-4f4c-b011-275de3212b76" in namespace "secrets-2182" to be "Succeeded or Failed" +Aug 3 07:46:11.389: INFO: Pod "pod-secrets-9515d964-b5f3-4f4c-b011-275de3212b76": Phase="Pending", Reason="", readiness=false. Elapsed: 7.87958ms +Aug 3 07:46:13.421: INFO: Pod "pod-secrets-9515d964-b5f3-4f4c-b011-275de3212b76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039609218s +Aug 3 07:46:15.434: INFO: Pod "pod-secrets-9515d964-b5f3-4f4c-b011-275de3212b76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05276173s +STEP: Saw pod success +Aug 3 07:46:15.434: INFO: Pod "pod-secrets-9515d964-b5f3-4f4c-b011-275de3212b76" satisfied condition "Succeeded or Failed" +Aug 3 07:46:15.441: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-secrets-9515d964-b5f3-4f4c-b011-275de3212b76 container secret-volume-test: +STEP: delete the pod +Aug 3 07:46:15.477: INFO: Waiting for pod pod-secrets-9515d964-b5f3-4f4c-b011-275de3212b76 to disappear +Aug 3 07:46:15.484: INFO: Pod pod-secrets-9515d964-b5f3-4f4c-b011-275de3212b76 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:46:15.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2182" for this suite. 
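+
+The secret-volume case pins file permissions via defaultMode. A sketch with placeholder names; the suite's actual mode value is not shown in the log, so 0400 here is only an example:
+```
+kubectl create secret generic demo-secret --from-literal=data-1=value-1
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-mode-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test
+    image: busybox
+    command: ["sh", "-c", "ls -l /etc/secret-volume"]
+    volumeMounts:
+    - name: secret-volume
+      mountPath: /etc/secret-volume
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: demo-secret
+      defaultMode: 0400   # YAML octal; 256 in decimal JSON
+EOF
+```
+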
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":255,"skipped":4870,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:46:15.505: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-de5ebafd-4834-4577-bb9d-0655cb617e5e in namespace container-probe-5623 +Aug 3 07:46:19.599: INFO: Started pod liveness-de5ebafd-4834-4577-bb9d-0655cb617e5e in namespace container-probe-5623 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 3 07:46:19.609: INFO: Initial restart count of pod liveness-de5ebafd-4834-4577-bb9d-0655cb617e5e is 0 +Aug 3 07:46:37.742: INFO: Restart count of pod container-probe-5623/liveness-de5ebafd-4834-4577-bb9d-0655cb617e5e is now 1 (18.132562659s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:46:37.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5623" for this suite. + +• [SLOW TEST:22.317 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":256,"skipped":4883,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:46:37.823: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ConfigMap +STEP: Ensuring resource quota status captures configMap creation +STEP: Deleting a ConfigMap +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:47:06.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-5448" for this suite. + +• [SLOW TEST:28.222 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":346,"completed":257,"skipped":4912,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:47:06.046: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename podtemplate +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:47:06.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-6614" for this suite. 
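+
+PodTemplate is a small core/v1 object whose lifecycle (create, get, patch, delete) the test drives through the API; a kubectl sketch with placeholder names:
+```
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: PodTemplate
+metadata:
+  name: demo-template
+template:
+  metadata:
+    labels:
+      app: demo
+  spec:
+    containers:
+    - name: demo
+      image: busybox
+      command: ["sh", "-c", "echo hello"]
+EOF
+kubectl get podtemplate demo-template
+kubectl patch podtemplate demo-template -p '{"metadata":{"annotations":{"patched":"true"}}}'
+kubectl delete podtemplate demo-template
+```
+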
+•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":346,"completed":258,"skipped":4923,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:47:06.176: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 +STEP: create the container to handle the HTTPGet hook request. +Aug 3 07:47:06.260: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:47:08.277: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:47:10.273: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:47:12.275: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Aug 3 07:47:12.389: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:47:14.406: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:47:16.401: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:47:18.408: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Aug 3 07:47:18.445: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Aug 3 07:47:18.451: INFO: Pod pod-with-poststart-exec-hook still exists +Aug 3 07:47:20.452: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Aug 3 07:47:20.463: INFO: Pod pod-with-poststart-exec-hook still exists +Aug 3 07:47:22.452: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Aug 3 07:47:22.463: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:47:22.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-7839" for this suite. 
+ +• [SLOW TEST:16.314 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":259,"skipped":4940,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:47:22.491: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename limitrange +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a LimitRange +STEP: Setting up watch +STEP: Submitting a LimitRange +Aug 3 07:47:22.599: INFO: observed the limitRanges list +STEP: Verifying LimitRange creation was observed +STEP: Fetching the LimitRange to ensure it has proper values +Aug 3 07:47:22.612: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Aug 3 07:47:22.612: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements +STEP: Ensuring Pod has resource requirements applied from LimitRange +Aug 3 07:47:22.633: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Aug 3 07:47:22.633: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements +STEP: Ensuring Pod has merged resource requirements applied from LimitRange +Aug 3 07:47:22.652: INFO: Verifying 
requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Aug 3 07:47:22.653: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources +STEP: Failing to create a Pod with more than max resources +STEP: Updating a LimitRange +STEP: Verifying LimitRange updating is effective +STEP: Creating a Pod with less than former min resources +STEP: Failing to create a Pod with more than max resources +STEP: Deleting a LimitRange +STEP: Verifying the LimitRange was deleted +Aug 3 07:47:29.732: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources +[AfterEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:47:29.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "limitrange-351" for this suite. + +• [SLOW TEST:7.280 seconds] +[sig-scheduling] LimitRange +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":346,"completed":260,"skipped":4968,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:47:29.772: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Aug 3 07:47:34.930: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:47:34.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-247" for this suite. + +• [SLOW TEST:5.207 seconds] +[sig-node] Container Runtime +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + blackbox test + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 + on terminated container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 + should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":346,"completed":261,"skipped":4978,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:47:34.979: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename runtimeclass +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support RuntimeClasses API operations [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/node.k8s.io +STEP: getting /apis/node.k8s.io/v1 +STEP: creating +STEP: watching +Aug 3 07:47:35.089: INFO: starting watch +STEP: getting +STEP: listing +STEP: patching +STEP: updating +Aug 3 07:47:35.122: INFO: waiting for watch events with expected annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:47:35.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-8753" for this suite. +•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":346,"completed":262,"skipped":4998,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:47:35.177: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
+Aug 3 07:47:35.249: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-5953 59337e5f-8a93-4804-ab22-b3a0c8473223 636042 0 2022-08-03 07:47:35 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dw2tf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dw2tf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Preemption
Policy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:47:35.259: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:47:37.276: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:47:39.271: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) +STEP: Verifying customized DNS suffix list is configured on pod... +Aug 3 07:47:39.271: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5953 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 07:47:39.271: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:47:39.273: INFO: ExecWithOptions: Clientset creation +Aug 3 07:47:39.273: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/dns-5953/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +STEP: Verifying customized DNS server is configured on pod... +Aug 3 07:47:39.494: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5953 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 07:47:39.494: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:47:39.496: INFO: ExecWithOptions: Clientset creation +Aug 3 07:47:39.496: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/dns-5953/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +Aug 3 07:47:39.723: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:47:39.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5953" for this suite. 
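+
+The pod dump above pins down the two fields this spec exercises: DNSPolicy None plus a DNSConfig carrying nameserver 1.1.1.1 and search domain resolv.conf.local. A minimal sketch of the same resolver setup outside the suite (pod name is illustrative; image and values are copied from the dump):
+```
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dns-config-demo        # hypothetical name
+spec:
+  dnsPolicy: "None"            # kubelet builds resolv.conf purely from dnsConfig
+  dnsConfig:
+    nameservers:
+      - 1.1.1.1
+    searches:
+      - resolv.conf.local
+  containers:
+    - name: agnhost-container
+      image: k8s.gcr.io/e2e-test-images/agnhost:2.33
+      args: ["pause"]
+EOF
+# /etc/resolv.conf inside the container now lists only the custom entries,
+# which is what the agnhost dns-suffix / dns-server-list probes verify.
+```
+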
+•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":263,"skipped":5015,"failed":0} +SSSS +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:47:39.762: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 +Aug 3 07:47:39.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-6187 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Aug 3 07:47:39.962: INFO: stderr: "" +Aug 3 07:47:39.962: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run +Aug 3 07:47:39.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-6187 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server' +Aug 3 07:47:41.115: INFO: stderr: "" +Aug 3 07:47:41.115: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 +Aug 3 07:47:41.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-6187 delete pods e2e-test-httpd-pod' +Aug 3 07:47:46.720: INFO: stderr: "" +Aug 3 07:47:46.720: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:47:46.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6187" for this suite. 
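+
+The two kubectl invocations above are the core of the check: a server-side dry-run patch is admitted and validated by the API server but never persisted. A standalone sketch of the same round trip (namespace and pod name copied from this run; the jsonpath read-back is just an illustrative way to confirm nothing changed):
+```
+kubectl -n kubectl-6187 patch pod e2e-test-httpd-pod --dry-run=server \
+  -p '{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}}'
+# The live object still carries the original image:
+kubectl -n kubectl-6187 get pod e2e-test-httpd-pod -o jsonpath='{.spec.containers[0].image}'
+# -> k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
+```
+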
+ +• [SLOW TEST:7.003 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl server-side dry-run + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:926 + should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":264,"skipped":5019,"failed":0} +S +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:47:46.765: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Aug 3 07:47:51.945: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:47:51.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9537" for this suite. 
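+
+What the PASS above asserts: when a container writes to its terminationMessagePath (default /dev/termination-log) and exits zero, TerminationMessagePolicy FallbackToLogsOnError still reports the file contents; the log fallback only applies when the file is empty and the container failed. A hedged sketch of the same behavior (pod name hypothetical; image borrowed from the suite):
+```
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: termination-msg-demo   # hypothetical name
+spec:
+  restartPolicy: Never
+  containers:
+    - name: main
+      image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
+      command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
+      terminationMessagePolicy: FallbackToLogsOnError
+EOF
+kubectl get pod termination-msg-demo \
+  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
+# -> OK
+```
+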
+ +• [SLOW TEST:5.257 seconds] +[sig-node] Container Runtime +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + blackbox test + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 + on terminated container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 + should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":346,"completed":265,"skipped":5020,"failed":0} +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:47:52.022: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Aug 3 07:47:52.122: INFO: Waiting up to 5m0s for pod "pod-bdc59951-0267-4e99-9a80-4b73bc6013ee" in namespace "emptydir-1414" to be "Succeeded or Failed" +Aug 3 07:47:52.129: INFO: Pod "pod-bdc59951-0267-4e99-9a80-4b73bc6013ee": Phase="Pending", Reason="", readiness=false. Elapsed: 7.147164ms +Aug 3 07:47:54.135: INFO: Pod "pod-bdc59951-0267-4e99-9a80-4b73bc6013ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013330448s +Aug 3 07:47:56.150: INFO: Pod "pod-bdc59951-0267-4e99-9a80-4b73bc6013ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027747181s +STEP: Saw pod success +Aug 3 07:47:56.150: INFO: Pod "pod-bdc59951-0267-4e99-9a80-4b73bc6013ee" satisfied condition "Succeeded or Failed" +Aug 3 07:47:56.159: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-bdc59951-0267-4e99-9a80-4b73bc6013ee container test-container: +STEP: delete the pod +Aug 3 07:47:56.203: INFO: Waiting for pod pod-bdc59951-0267-4e99-9a80-4b73bc6013ee to disappear +Aug 3 07:47:56.208: INFO: Pod pod-bdc59951-0267-4e99-9a80-4b73bc6013ee no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:47:56.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1414" for this suite. 
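+
+A rough sketch of what this spec does by hand, assuming a non-root UID and the default (node-disk) emptyDir medium; the suite's mounttest image additionally asserts the 0777 mode bits, which the plain ls here only eyeballs:
+```
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-demo          # hypothetical name
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1001            # illustrative non-root UID
+  containers:
+    - name: test-container
+      image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
+      command: ["/bin/sh", "-c", "id -u; ls -ld /mnt/volume; echo hi > /mnt/volume/f"]
+      volumeMounts:
+        - name: scratch
+          mountPath: /mnt/volume
+  volumes:
+    - name: scratch
+      emptyDir: {}
+EOF
+```
+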
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":266,"skipped":5024,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:47:56.231: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:48:13.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-8845" for this suite. +STEP: Destroying namespace "nsdeletetest-8553" for this suite. +Aug 3 07:48:13.585: INFO: Namespace nsdeletetest-8553 was already deleted +STEP: Destroying namespace "nsdeletetest-426" for this suite. 
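+
+The step list above is effectively a five-command procedure; the same flow with hypothetical names:
+```
+kubectl create namespace nsdelete-demo
+kubectl -n nsdelete-demo run test-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
+kubectl delete namespace nsdelete-demo   # blocks until the namespace finalizes
+kubectl create namespace nsdelete-demo
+kubectl -n nsdelete-demo get pods        # No resources found: the pod went with the namespace
+```
+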
+ +• [SLOW TEST:17.389 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":267,"skipped":5059,"failed":0} +SSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:48:13.620: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2770 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-2770 +STEP: creating replication controller externalsvc in namespace services-2770 +I0803 07:48:13.849683 21 runners.go:193] Created replication controller with name: externalsvc, namespace: services-2770, replica count: 2 +I0803 07:48:16.901160 21 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 07:48:19.902127 21 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName +Aug 3 07:48:19.933: INFO: Creating new exec pod +Aug 3 07:48:23.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-2770 exec execpod4rcxs -- /bin/sh -x -c nslookup clusterip-service.services-2770.svc.cluster.local' +Aug 3 07:48:24.299: INFO: stderr: "+ nslookup clusterip-service.services-2770.svc.cluster.local\n" +Aug 3 07:48:24.299: INFO: stdout: "Server:\t\t172.31.0.10\nAddress:\t172.31.0.10#53\n\nclusterip-service.services-2770.svc.cluster.local\tcanonical name = externalsvc.services-2770.svc.cluster.local.\nName:\texternalsvc.services-2770.svc.cluster.local\nAddress: 172.31.84.17\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-2770, will wait for the garbage collector to delete the pods +Aug 3 07:48:24.379: INFO: Deleting ReplicationController externalsvc took: 21.631079ms +Aug 3 07:48:24.480: INFO: Terminating ReplicationController externalsvc pods took: 101.014945ms +Aug 3 07:48:28.540: INFO: Cleaning up the ClusterIP to ExternalName test service 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:48:28.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2770" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:14.984 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":346,"completed":268,"skipped":5063,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:48:28.604: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring job reaches completions +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:48:40.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-5058" for this suite. 
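+
+The Job above completes even though individual attempts fail, because restartPolicy: OnFailure restarts the container in place instead of failing the pod. A sketch under those assumptions (all names hypothetical; an emptyDir marker file makes each pod's first attempt fail and its restarted attempt succeed, mimicking "sometimes fail"):
+```
+kubectl apply -f - <<EOF
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: flaky-job-demo
+spec:
+  completions: 2
+  template:
+    spec:
+      restartPolicy: OnFailure
+      containers:
+        - name: worker
+          image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
+          command: ["/bin/sh", "-c",
+            "if [ -f /data/done ]; then exit 0; else touch /data/done; exit 1; fi"]
+          volumeMounts:
+            - name: data
+              mountPath: /data
+      volumes:
+        - name: data
+          emptyDir: {}
+EOF
+kubectl wait --for=condition=complete job/flaky-job-demo --timeout=120s
+```
+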
+ +• [SLOW TEST:12.134 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":346,"completed":269,"skipped":5072,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:48:40.740: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:48:40.827: INFO: Creating simple deployment test-new-deployment +Aug 3 07:48:40.863: INFO: deployment "test-new-deployment" doesn't have the required revision set +Aug 3 07:48:42.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 48, 40, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 48, 40, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 48, 40, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 48, 40, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the deployment Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 3 07:48:45.014: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-2445 4d3720ad-2774-4299-9570-a3833052d701 636802 3 2022-08-03 07:48:40 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} 
[] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc007811f28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:2,UpdatedReplicas:2,AvailableReplicas:1,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-5d9fdcc779" has successfully progressed.,LastUpdateTime:2022-08-03 07:48:44 +0000 UTC,LastTransitionTime:2022-08-03 07:48:40 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-08-03 07:48:44 +0000 UTC,LastTransitionTime:2022-08-03 07:48:44 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Aug 3 07:48:45.023: INFO: New ReplicaSet "test-new-deployment-5d9fdcc779" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-5d9fdcc779 deployment-2445 f119b481-065d-4f92-bb3b-8e8156cd5772 636812 3 2022-08-03 07:48:40 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 4d3720ad-2774-4299-9570-a3833052d701 0xc001134397 0xc001134398}] [] []},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0011343f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Aug 3 07:48:45.031: INFO: Pod "test-new-deployment-5d9fdcc779-f8wt4" is not available: +&Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-f8wt4 test-new-deployment-5d9fdcc779- deployment-2445 d91fb529-d9de-46b7-a4c7-5544ab3a1e7d 636815 0 2022-08-03 07:48:44 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] 
[{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 f119b481-065d-4f92-bb3b-8e8156cd5772 0xc005544157 0xc005544158}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rv5pb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rv5pb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContain
er{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:48:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:48:45.031: INFO: Pod "test-new-deployment-5d9fdcc779-ppbpz" is not available: +&Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-ppbpz test-new-deployment-5d9fdcc779- deployment-2445 ef36db5a-2927-48d0-8ab2-2bee53be20d5 636810 0 2022-08-03 07:48:44 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 f119b481-065d-4f92-bb3b-8e8156cd5772 0xc0055442b0 0xc0055442b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tlnfp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tlnfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil
,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:48:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:48:45.032: INFO: Pod "test-new-deployment-5d9fdcc779-qg5x6" is available: +&Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-qg5x6 test-new-deployment-5d9fdcc779- deployment-2445 0bf53799-edb3-4594-8b85-43d17fa8f528 636787 0 2022-08-03 07:48:40 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 f119b481-065d-4f92-bb3b-8e8156cd5772 0xc005544400 0xc005544401}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-58p9t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58p9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:48:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:48:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:48:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:48:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:172.29.175.43,StartTime:2022-08-03 07:48:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 07:48:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:docker://95c8e6bb3f6b18390d0e50f943388f73d13a50fe42835cd39cc1f5b5d1a174a8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.175.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 3 07:48:45.032: INFO: Pod "test-new-deployment-5d9fdcc779-vbr5c" is not available: +&Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-vbr5c test-new-deployment-5d9fdcc779- deployment-2445 6916213a-169c-41a4-98e0-8c106ea51250 636803 0 2022-08-03 07:48:44 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 f119b481-065d-4f92-bb3b-8e8156cd5772 0xc0055445b7 0xc0055445b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8sfk4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8sfk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-40,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:48:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:48:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:48:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 07:48:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.40,PodIP:,StartTime:2022-08-03 07:48:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:48:45.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2445" for this suite. +•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":270,"skipped":5156,"failed":0} + +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:48:45.051: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Aug 3 07:48:45.118: INFO: The status of Pod pod-update-8b670a01-24a9-4685-8a6a-c60ce42d6e44 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:48:47.137: INFO: The status of Pod pod-update-8b670a01-24a9-4685-8a6a-c60ce42d6e44 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:48:49.130: INFO: The status of Pod pod-update-8b670a01-24a9-4685-8a6a-c60ce42d6e44 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Aug 3 07:48:49.661: INFO: Successfully updated pod "pod-update-8b670a01-24a9-4685-8a6a-c60ce42d6e44" +STEP: verifying the updated pod is in kubernetes +Aug 3 07:48:49.681: INFO: Pod update OK +[AfterEach] [sig-node] Pods + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:48:49.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1977" for this suite. +•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":271,"skipped":5156,"failed":0} + +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:48:49.697: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Aug 3 07:48:53.827: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:48:53.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-1612" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":346,"completed":272,"skipped":5156,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:48:53.920: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 07:48:54.099: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b39e726f-9099-408f-bcb8-594b57e232a4" in namespace "projected-4267" to be "Succeeded or Failed" +Aug 3 07:48:54.201: INFO: Pod "downwardapi-volume-b39e726f-9099-408f-bcb8-594b57e232a4": Phase="Pending", Reason="", readiness=false. Elapsed: 101.319408ms +Aug 3 07:48:56.223: INFO: Pod "downwardapi-volume-b39e726f-9099-408f-bcb8-594b57e232a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123541523s +Aug 3 07:48:58.236: INFO: Pod "downwardapi-volume-b39e726f-9099-408f-bcb8-594b57e232a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136458792s +Aug 3 07:49:00.246: INFO: Pod "downwardapi-volume-b39e726f-9099-408f-bcb8-594b57e232a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146934336s +STEP: Saw pod success +Aug 3 07:49:00.247: INFO: Pod "downwardapi-volume-b39e726f-9099-408f-bcb8-594b57e232a4" satisfied condition "Succeeded or Failed" +Aug 3 07:49:00.257: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-b39e726f-9099-408f-bcb8-594b57e232a4 container client-container: +STEP: delete the pod +Aug 3 07:49:00.297: INFO: Waiting for pod downwardapi-volume-b39e726f-9099-408f-bcb8-594b57e232a4 to disappear +Aug 3 07:49:00.308: INFO: Pod downwardapi-volume-b39e726f-9099-408f-bcb8-594b57e232a4 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:49:00.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4267" for this suite. 
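+
+The projected downward API volume behind this spec maps a container's own resource request to a file it can read. A sketch (names and the 250m request are illustrative; divisor 1m makes the value read back as plain millicores):
+```
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downwardapi-demo       # hypothetical name
+spec:
+  restartPolicy: Never
+  containers:
+    - name: client-container
+      image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
+      command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
+      resources:
+        requests:
+          cpu: 250m
+      volumeMounts:
+        - name: podinfo
+          mountPath: /etc/podinfo
+  volumes:
+    - name: podinfo
+      projected:
+        sources:
+          - downwardAPI:
+              items:
+                - path: cpu_request
+                  resourceFieldRef:
+                    containerName: client-container
+                    resource: requests.cpu
+                    divisor: 1m
+EOF
+kubectl logs downwardapi-demo   # -> 250
+```
+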
+ +• [SLOW TEST:6.414 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":273,"skipped":5180,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:49:00.335: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-c1820af1-fa06-443c-9b34-6482340bf0dd +STEP: Creating a pod to test consume configMaps +Aug 3 07:49:00.533: INFO: Waiting up to 5m0s for pod "pod-configmaps-adc89909-5492-46a1-9296-bf86260aaf59" in namespace "configmap-4384" to be "Succeeded or Failed" +Aug 3 07:49:00.540: INFO: Pod "pod-configmaps-adc89909-5492-46a1-9296-bf86260aaf59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.750351ms +Aug 3 07:49:02.552: INFO: Pod "pod-configmaps-adc89909-5492-46a1-9296-bf86260aaf59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018758886s +Aug 3 07:49:04.561: INFO: Pod "pod-configmaps-adc89909-5492-46a1-9296-bf86260aaf59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027743097s +Aug 3 07:49:06.579: INFO: Pod "pod-configmaps-adc89909-5492-46a1-9296-bf86260aaf59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04660669s +STEP: Saw pod success +Aug 3 07:49:06.580: INFO: Pod "pod-configmaps-adc89909-5492-46a1-9296-bf86260aaf59" satisfied condition "Succeeded or Failed" +Aug 3 07:49:06.585: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-configmaps-adc89909-5492-46a1-9296-bf86260aaf59 container agnhost-container: +STEP: delete the pod +Aug 3 07:49:06.668: INFO: Waiting for pod pod-configmaps-adc89909-5492-46a1-9296-bf86260aaf59 to disappear +Aug 3 07:49:06.676: INFO: Pod pod-configmaps-adc89909-5492-46a1-9296-bf86260aaf59 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:49:06.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4384" for this suite. 
+ +• [SLOW TEST:6.363 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":274,"skipped":5192,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:49:06.701: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Aug 3 07:49:06.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7374 create -f -' +Aug 3 07:49:07.825: INFO: stderr: "" +Aug 3 07:49:07.825: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Aug 3 07:49:08.842: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 07:49:08.842: INFO: Found 0 / 1 +Aug 3 07:49:09.833: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 07:49:09.833: INFO: Found 0 / 1 +Aug 3 07:49:10.840: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 07:49:10.840: INFO: Found 0 / 1 +Aug 3 07:49:11.850: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 07:49:11.850: INFO: Found 0 / 1 +Aug 3 07:49:12.837: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 07:49:12.837: INFO: Found 1 / 1 +Aug 3 07:49:12.837: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Aug 3 07:49:12.844: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 07:49:12.844: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Aug 3 07:49:12.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7374 patch pod agnhost-primary-frdlt -p {"metadata":{"annotations":{"x":"y"}}}' +Aug 3 07:49:12.998: INFO: stderr: "" +Aug 3 07:49:12.998: INFO: stdout: "pod/agnhost-primary-frdlt patched\n" +STEP: checking annotations +Aug 3 07:49:13.004: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 3 07:49:13.004: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:49:13.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7374" for this suite. + +• [SLOW TEST:6.322 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl patch + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1483 + should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":346,"completed":275,"skipped":5279,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:49:13.025: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Aug 3 07:49:13.105: INFO: Waiting up to 5m0s for pod "pod-2982545d-d150-4782-a71a-d76e9c235433" in namespace "emptydir-8559" to be "Succeeded or Failed" +Aug 3 07:49:13.113: INFO: Pod "pod-2982545d-d150-4782-a71a-d76e9c235433": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196309ms +Aug 3 07:49:15.124: INFO: Pod "pod-2982545d-d150-4782-a71a-d76e9c235433": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019620207s +Aug 3 07:49:17.139: INFO: Pod "pod-2982545d-d150-4782-a71a-d76e9c235433": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033991359s +Aug 3 07:49:19.158: INFO: Pod "pod-2982545d-d150-4782-a71a-d76e9c235433": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053326818s +STEP: Saw pod success +Aug 3 07:49:19.158: INFO: Pod "pod-2982545d-d150-4782-a71a-d76e9c235433" satisfied condition "Succeeded or Failed" +Aug 3 07:49:19.166: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-2982545d-d150-4782-a71a-d76e9c235433 container test-container: +STEP: delete the pod +Aug 3 07:49:19.211: INFO: Waiting for pod pod-2982545d-d150-4782-a71a-d76e9c235433 to disappear +Aug 3 07:49:19.217: INFO: Pod pod-2982545d-d150-4782-a71a-d76e9c235433 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:49:19.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8559" for this suite. 
+ +• [SLOW TEST:6.215 seconds] +[sig-storage] EmptyDir volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":276,"skipped":5380,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:49:19.240: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-dc3b90bf-41cb-46dc-9064-3bbc85f61d20 +STEP: Creating a pod to test consume configMaps +Aug 3 07:49:19.337: INFO: Waiting up to 5m0s for pod "pod-configmaps-84d201a4-bc53-4693-9d32-b76b26e1fd28" in namespace "configmap-6130" to be "Succeeded or Failed" +Aug 3 07:49:19.347: INFO: Pod "pod-configmaps-84d201a4-bc53-4693-9d32-b76b26e1fd28": Phase="Pending", Reason="", readiness=false. Elapsed: 10.178811ms +Aug 3 07:49:21.362: INFO: Pod "pod-configmaps-84d201a4-bc53-4693-9d32-b76b26e1fd28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025261129s +Aug 3 07:49:23.387: INFO: Pod "pod-configmaps-84d201a4-bc53-4693-9d32-b76b26e1fd28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04989022s +STEP: Saw pod success +Aug 3 07:49:23.387: INFO: Pod "pod-configmaps-84d201a4-bc53-4693-9d32-b76b26e1fd28" satisfied condition "Succeeded or Failed" +Aug 3 07:49:23.409: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-configmaps-84d201a4-bc53-4693-9d32-b76b26e1fd28 container configmap-volume-test: +STEP: delete the pod +Aug 3 07:49:23.533: INFO: Waiting for pod pod-configmaps-84d201a4-bc53-4693-9d32-b76b26e1fd28 to disappear +Aug 3 07:49:23.557: INFO: Pod pod-configmaps-84d201a4-bc53-4693-9d32-b76b26e1fd28 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:49:23.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6130" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":277,"skipped":5392,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:49:23.645: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating pod +Aug 3 07:49:23.932: INFO: The status of Pod pod-hostip-e8f0ec6a-16af-4ec7-a0cd-9410717f1175 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:49:25.976: INFO: The status of Pod pod-hostip-e8f0ec6a-16af-4ec7-a0cd-9410717f1175 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:49:27.946: INFO: The status of Pod pod-hostip-e8f0ec6a-16af-4ec7-a0cd-9410717f1175 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:49:29.940: INFO: The status of Pod pod-hostip-e8f0ec6a-16af-4ec7-a0cd-9410717f1175 is Running (Ready = true) +Aug 3 07:49:29.954: INFO: Pod pod-hostip-e8f0ec6a-16af-4ec7-a0cd-9410717f1175 has hostIP: 10.6.213.50 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:49:29.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5851" for this suite. 
+ +• [SLOW TEST:6.331 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":346,"completed":278,"skipped":5414,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:49:29.976: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Aug 3 07:49:30.056: INFO: Waiting up to 5m0s for pod "pod-3d98475b-b4a2-487d-bcaa-c40040b3c8db" in namespace "emptydir-1631" to be "Succeeded or Failed" +Aug 3 07:49:30.064: INFO: Pod "pod-3d98475b-b4a2-487d-bcaa-c40040b3c8db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.495507ms +Aug 3 07:49:32.076: INFO: Pod "pod-3d98475b-b4a2-487d-bcaa-c40040b3c8db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020084086s +Aug 3 07:49:34.099: INFO: Pod "pod-3d98475b-b4a2-487d-bcaa-c40040b3c8db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04351367s +Aug 3 07:49:36.113: INFO: Pod "pod-3d98475b-b4a2-487d-bcaa-c40040b3c8db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056632429s +STEP: Saw pod success +Aug 3 07:49:36.113: INFO: Pod "pod-3d98475b-b4a2-487d-bcaa-c40040b3c8db" satisfied condition "Succeeded or Failed" +Aug 3 07:49:36.122: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-3d98475b-b4a2-487d-bcaa-c40040b3c8db container test-container: +STEP: delete the pod +Aug 3 07:49:36.162: INFO: Waiting for pod pod-3d98475b-b4a2-487d-bcaa-c40040b3c8db to disappear +Aug 3 07:49:36.169: INFO: Pod pod-3d98475b-b4a2-487d-bcaa-c40040b3c8db no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:49:36.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1631" for this suite. 
+ +• [SLOW TEST:6.219 seconds] +[sig-storage] EmptyDir volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":279,"skipped":5455,"failed":0} +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:49:36.196: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:49:36.374: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d94e1544-456a-4559-8a18-31f5ae00fbd9", Controller:(*bool)(0xc0057f54fa), BlockOwnerDeletion:(*bool)(0xc0057f54fb)}} +Aug 3 07:49:36.399: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f2c0a613-79a5-4229-8cf1-23ff4e041c00", Controller:(*bool)(0xc0057f5782), BlockOwnerDeletion:(*bool)(0xc0057f5783)}} +Aug 3 07:49:36.425: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b71639be-7cfe-4d44-999c-4ada62311a90", Controller:(*bool)(0xc0057f5a32), BlockOwnerDeletion:(*bool)(0xc0057f5a33)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:49:41.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-4401" for this suite. 
+ +• [SLOW TEST:5.300 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":346,"completed":280,"skipped":5455,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:49:41.499: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:49:42.418: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 07:49:44.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 49, 42, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 49, 42, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 49, 42, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 49, 42, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:49:46.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 49, 42, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 49, 42, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 49, 42, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 49, 42, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying 
the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 07:49:49.492: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:49:49.500: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Registering the custom resource webhook via the AdmissionRegistration API +STEP: Creating a custom resource that should be denied by the webhook +STEP: Creating a custom resource whose deletion would be denied by the webhook +STEP: Updating the custom resource with disallowed data should be denied +STEP: Deleting the custom resource should be denied +STEP: Remove the offending key and value from the custom resource data +STEP: Deleting the updated custom resource should be successful +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:49:52.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5246" for this suite. +STEP: Destroying namespace "webhook-5246-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:11.338 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":281,"skipped":5479,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:49:52.839: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename discovery +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 +STEP: Setting up server cert +[It] should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:49:53.260: INFO: Checking APIGroup: apiregistration.k8s.io +Aug 3 07:49:53.262: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Aug 3 07:49:53.262: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Aug 3 07:49:53.262: INFO: apiregistration.k8s.io/v1 matches 
apiregistration.k8s.io/v1 +Aug 3 07:49:53.262: INFO: Checking APIGroup: apps +Aug 3 07:49:53.265: INFO: PreferredVersion.GroupVersion: apps/v1 +Aug 3 07:49:53.265: INFO: Versions found [{apps/v1 v1}] +Aug 3 07:49:53.265: INFO: apps/v1 matches apps/v1 +Aug 3 07:49:53.265: INFO: Checking APIGroup: events.k8s.io +Aug 3 07:49:53.272: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Aug 3 07:49:53.272: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] +Aug 3 07:49:53.272: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Aug 3 07:49:53.272: INFO: Checking APIGroup: authentication.k8s.io +Aug 3 07:49:53.274: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Aug 3 07:49:53.274: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Aug 3 07:49:53.274: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Aug 3 07:49:53.275: INFO: Checking APIGroup: authorization.k8s.io +Aug 3 07:49:53.279: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Aug 3 07:49:53.279: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Aug 3 07:49:53.279: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Aug 3 07:49:53.279: INFO: Checking APIGroup: autoscaling +Aug 3 07:49:53.281: INFO: PreferredVersion.GroupVersion: autoscaling/v2 +Aug 3 07:49:53.281: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] +Aug 3 07:49:53.281: INFO: autoscaling/v2 matches autoscaling/v2 +Aug 3 07:49:53.281: INFO: Checking APIGroup: batch +Aug 3 07:49:53.286: INFO: PreferredVersion.GroupVersion: batch/v1 +Aug 3 07:49:53.286: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] +Aug 3 07:49:53.286: INFO: batch/v1 matches batch/v1 +Aug 3 07:49:53.286: INFO: Checking APIGroup: certificates.k8s.io +Aug 3 07:49:53.288: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Aug 3 07:49:53.288: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Aug 3 07:49:53.288: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Aug 3 07:49:53.288: INFO: Checking APIGroup: networking.k8s.io +Aug 3 07:49:53.292: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Aug 3 07:49:53.292: INFO: Versions found [{networking.k8s.io/v1 v1}] +Aug 3 07:49:53.292: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Aug 3 07:49:53.292: INFO: Checking APIGroup: policy +Aug 3 07:49:53.294: INFO: PreferredVersion.GroupVersion: policy/v1 +Aug 3 07:49:53.294: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] +Aug 3 07:49:53.294: INFO: policy/v1 matches policy/v1 +Aug 3 07:49:53.294: INFO: Checking APIGroup: rbac.authorization.k8s.io +Aug 3 07:49:53.296: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Aug 3 07:49:53.296: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Aug 3 07:49:53.296: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Aug 3 07:49:53.296: INFO: Checking APIGroup: storage.k8s.io +Aug 3 07:49:53.298: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Aug 3 07:49:53.298: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Aug 3 07:49:53.298: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Aug 3 07:49:53.298: INFO: Checking APIGroup: admissionregistration.k8s.io +Aug 3 07:49:53.300: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Aug 3 07:49:53.300: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Aug 3 07:49:53.300: INFO: admissionregistration.k8s.io/v1 
matches admissionregistration.k8s.io/v1 +Aug 3 07:49:53.300: INFO: Checking APIGroup: apiextensions.k8s.io +Aug 3 07:49:53.302: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Aug 3 07:49:53.302: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Aug 3 07:49:53.302: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Aug 3 07:49:53.302: INFO: Checking APIGroup: scheduling.k8s.io +Aug 3 07:49:53.304: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Aug 3 07:49:53.304: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Aug 3 07:49:53.304: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Aug 3 07:49:53.304: INFO: Checking APIGroup: coordination.k8s.io +Aug 3 07:49:53.306: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Aug 3 07:49:53.306: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Aug 3 07:49:53.306: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Aug 3 07:49:53.306: INFO: Checking APIGroup: node.k8s.io +Aug 3 07:49:53.307: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Aug 3 07:49:53.307: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] +Aug 3 07:49:53.307: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Aug 3 07:49:53.307: INFO: Checking APIGroup: discovery.k8s.io +Aug 3 07:49:53.310: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Aug 3 07:49:53.310: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] +Aug 3 07:49:53.310: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Aug 3 07:49:53.310: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Aug 3 07:49:53.311: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta2 +Aug 3 07:49:53.311: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta2 v1beta2} {flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] +Aug 3 07:49:53.311: INFO: flowcontrol.apiserver.k8s.io/v1beta2 matches flowcontrol.apiserver.k8s.io/v1beta2 +Aug 3 07:49:53.311: INFO: Checking APIGroup: uds.dce.daocloud.io +Aug 3 07:49:53.314: INFO: PreferredVersion.GroupVersion: uds.dce.daocloud.io/v1 +Aug 3 07:49:53.314: INFO: Versions found [{uds.dce.daocloud.io/v1 v1} {uds.dce.daocloud.io/v1alpha1 v1alpha1}] +Aug 3 07:49:53.314: INFO: uds.dce.daocloud.io/v1 matches uds.dce.daocloud.io/v1 +Aug 3 07:49:53.314: INFO: Checking APIGroup: dce.daocloud.io +Aug 3 07:49:53.318: INFO: PreferredVersion.GroupVersion: dce.daocloud.io/v1beta1 +Aug 3 07:49:53.318: INFO: Versions found [{dce.daocloud.io/v1beta1 v1beta1}] +Aug 3 07:49:53.318: INFO: dce.daocloud.io/v1beta1 matches dce.daocloud.io/v1beta1 +Aug 3 07:49:53.318: INFO: Checking APIGroup: snapshot.storage.k8s.io +Aug 3 07:49:53.324: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1beta1 +Aug 3 07:49:53.324: INFO: Versions found [{snapshot.storage.k8s.io/v1beta1 v1beta1}] +Aug 3 07:49:53.324: INFO: snapshot.storage.k8s.io/v1beta1 matches snapshot.storage.k8s.io/v1beta1 +[AfterEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:49:53.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "discovery-9199" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":282,"skipped":5528,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:49:53.343: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Aug 3 07:49:53.415: INFO: Waiting up to 5m0s for pod "pod-4443d9f8-7b68-4833-ad8c-66811189f040" in namespace "emptydir-8752" to be "Succeeded or Failed" +Aug 3 07:49:53.421: INFO: Pod "pod-4443d9f8-7b68-4833-ad8c-66811189f040": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087231ms +Aug 3 07:49:55.432: INFO: Pod "pod-4443d9f8-7b68-4833-ad8c-66811189f040": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017497732s +Aug 3 07:49:57.451: INFO: Pod "pod-4443d9f8-7b68-4833-ad8c-66811189f040": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036621821s +STEP: Saw pod success +Aug 3 07:49:57.451: INFO: Pod "pod-4443d9f8-7b68-4833-ad8c-66811189f040" satisfied condition "Succeeded or Failed" +Aug 3 07:49:57.459: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-4443d9f8-7b68-4833-ad8c-66811189f040 container test-container: +STEP: delete the pod +Aug 3 07:49:57.501: INFO: Waiting for pod pod-4443d9f8-7b68-4833-ad8c-66811189f040 to disappear +Aug 3 07:49:57.506: INFO: Pod pod-4443d9f8-7b68-4833-ad8c-66811189f040 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:49:57.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8752" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":283,"skipped":5545,"failed":0} +SSSS +------------------------------ +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:49:57.539: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:49:57.638: INFO: The status of Pod server-envvars-aa98a80e-c572-4b43-ad9e-76a16fee9765 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:49:59.646: INFO: The status of Pod server-envvars-aa98a80e-c572-4b43-ad9e-76a16fee9765 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:50:01.653: INFO: The status of Pod server-envvars-aa98a80e-c572-4b43-ad9e-76a16fee9765 is Running (Ready = true) +Aug 3 07:50:01.707: INFO: Waiting up to 5m0s for pod "client-envvars-f2197a9c-4780-413b-9a06-72392110dc59" in namespace "pods-4393" to be "Succeeded or Failed" +Aug 3 07:50:01.722: INFO: Pod "client-envvars-f2197a9c-4780-413b-9a06-72392110dc59": Phase="Pending", Reason="", readiness=false. Elapsed: 15.116734ms +Aug 3 07:50:03.732: INFO: Pod "client-envvars-f2197a9c-4780-413b-9a06-72392110dc59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025120736s +Aug 3 07:50:05.747: INFO: Pod "client-envvars-f2197a9c-4780-413b-9a06-72392110dc59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039820867s +STEP: Saw pod success +Aug 3 07:50:05.747: INFO: Pod "client-envvars-f2197a9c-4780-413b-9a06-72392110dc59" satisfied condition "Succeeded or Failed" +Aug 3 07:50:05.756: INFO: Trying to get logs from node dce-10-6-213-50 pod client-envvars-f2197a9c-4780-413b-9a06-72392110dc59 container env3cont: +STEP: delete the pod +Aug 3 07:50:05.788: INFO: Waiting for pod client-envvars-f2197a9c-4780-413b-9a06-72392110dc59 to disappear +Aug 3 07:50:05.796: INFO: Pod client-envvars-f2197a9c-4780-413b-9a06-72392110dc59 no longer exists +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:50:05.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4393" for this suite. 
+ +• [SLOW TEST:8.281 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":284,"skipped":5549,"failed":0} +SSSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:50:05.820: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-6170 +STEP: creating service affinity-nodeport in namespace services-6170 +STEP: creating replication controller affinity-nodeport in namespace services-6170 +I0803 07:50:05.956968 21 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-6170, replica count: 3 +I0803 07:50:09.008567 21 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 07:50:12.010272 21 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 07:50:12.038: INFO: Creating new exec pod +Aug 3 07:50:19.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6170 exec execpod-affinityzpbkd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' +Aug 3 07:50:19.387: INFO: rc: 1 +Aug 3 07:50:19.387: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6170 exec execpod-affinityzpbkd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80: +Command stdout: + +stderr: ++ echo hostName ++ nc -v -t -w 2 affinity-nodeport 80 +nc: connect to affinity-nodeport port 80 (tcp) failed: Connection refused +command terminated with exit code 1 + +error: +exit status 1 +Retrying... 
+Aug 3 07:50:20.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6170 exec execpod-affinityzpbkd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' +Aug 3 07:50:20.676: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Aug 3 07:50:20.676: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 07:50:20.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6170 exec execpod-affinityzpbkd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.92.5 80' +Aug 3 07:50:20.988: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.92.5 80\nConnection to 172.31.92.5 80 port [tcp/http] succeeded!\n" +Aug 3 07:50:20.988: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 07:50:20.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6170 exec execpod-affinityzpbkd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.6.213.40 31757' +Aug 3 07:50:21.281: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.6.213.40 31757\nConnection to 10.6.213.40 31757 port [tcp/*] succeeded!\n" +Aug 3 07:50:21.281: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 07:50:21.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6170 exec execpod-affinityzpbkd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.6.213.50 31757' +Aug 3 07:50:21.608: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.6.213.50 31757\nConnection to 10.6.213.50 31757 port [tcp/*] succeeded!\n" +Aug 3 07:50:21.609: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 07:50:21.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-6170 exec execpod-affinityzpbkd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.6.213.40:31757/ ; done' +Aug 3 07:50:22.098: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.6.213.40:31757/\n" +Aug 3 07:50:22.098: INFO: stdout: 
"\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt\naffinity-nodeport-tjvlt" +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Received response from host: affinity-nodeport-tjvlt +Aug 3 07:50:22.098: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-6170, will wait for the garbage collector to delete the pods +Aug 3 07:50:22.230: INFO: Deleting ReplicationController affinity-nodeport took: 55.096685ms +Aug 3 07:50:22.331: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.971727ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:50:26.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6170" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:20.939 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":285,"skipped":5553,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:50:26.759: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Aug 3 07:50:27.696: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:50:31.643: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:50:47.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-7255" for this suite. 
+ +• [SLOW TEST:21.033 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":286,"skipped":5558,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:50:47.794: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-7612/configmap-test-2d13f988-8182-4fff-9cf6-925fbe0a7677 +STEP: Creating a pod to test consume configMaps +Aug 3 07:50:47.874: INFO: Waiting up to 5m0s for pod "pod-configmaps-0ad04213-c9ab-4d5b-9e7c-e8984ca79807" in namespace "configmap-7612" to be "Succeeded or Failed" +Aug 3 07:50:47.880: INFO: Pod "pod-configmaps-0ad04213-c9ab-4d5b-9e7c-e8984ca79807": Phase="Pending", Reason="", readiness=false. Elapsed: 5.766672ms +Aug 3 07:50:49.888: INFO: Pod "pod-configmaps-0ad04213-c9ab-4d5b-9e7c-e8984ca79807": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013449713s +Aug 3 07:50:51.907: INFO: Pod "pod-configmaps-0ad04213-c9ab-4d5b-9e7c-e8984ca79807": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03260836s +Aug 3 07:50:53.917: INFO: Pod "pod-configmaps-0ad04213-c9ab-4d5b-9e7c-e8984ca79807": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042493685s +STEP: Saw pod success +Aug 3 07:50:53.917: INFO: Pod "pod-configmaps-0ad04213-c9ab-4d5b-9e7c-e8984ca79807" satisfied condition "Succeeded or Failed" +Aug 3 07:50:53.922: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-configmaps-0ad04213-c9ab-4d5b-9e7c-e8984ca79807 container env-test: +STEP: delete the pod +Aug 3 07:50:53.956: INFO: Waiting for pod pod-configmaps-0ad04213-c9ab-4d5b-9e7c-e8984ca79807 to disappear +Aug 3 07:50:53.961: INFO: Pod pod-configmaps-0ad04213-c9ab-4d5b-9e7c-e8984ca79807 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:50:53.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7612" for this suite. 
+ +• [SLOW TEST:6.192 seconds] +[sig-node] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":346,"completed":287,"skipped":5666,"failed":0} +SSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:50:53.986: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-7e6dc29c-dc41-45f4-b151-33ece4a59899 in namespace container-probe-9081 +Aug 3 07:51:00.098: INFO: Started pod busybox-7e6dc29c-dc41-45f4-b151-33ece4a59899 in namespace container-probe-9081 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 3 07:51:00.104: INFO: Initial restart count of pod busybox-7e6dc29c-dc41-45f4-b151-33ece4a59899 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:55:00.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9081" for this suite. 
+ +• [SLOW TEST:246.547 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":288,"skipped":5669,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:55:00.534: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Aug 3 07:55:00.596: INFO: PodSpec: initContainers in spec.initContainers +Aug 3 07:55:47.380: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8da0f07d-ef52-4fb5-bb29-b8034688524d", GenerateName:"", Namespace:"init-container-3069", SelfLink:"", UID:"a1de5d8a-bccb-42cc-858b-b86ba4aa858c", ResourceVersion:"639222", Generation:0, CreationTimestamp:time.Date(2022, time.August, 3, 7, 55, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"596331810"}, Annotations:map[string]string{"cni.projectcalico.org/ipv4pools":"[\"default-ipv4-ippool\"]", "dce.daocloud.io/parcel.egress.burst":"0", "dce.daocloud.io/parcel.egress.rate":"0", "dce.daocloud.io/parcel.ingress.burst":"0", "dce.daocloud.io/parcel.ingress.rate":"0"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-njgv4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00c3ca0a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-njgv4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-njgv4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-njgv4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0049ab6e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"dce-10-6-213-50", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00259c930), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0049ab770)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0049ab790)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0049ab798), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0049ab79c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0046708d0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.August, 3, 7, 55, 0, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.August, 3, 7, 55, 0, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.August, 3, 7, 55, 0, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.August, 3, 7, 55, 0, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.6.213.50", PodIP:"172.29.175.48", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.29.175.48"}}, StartTime:time.Date(2022, time.August, 3, 7, 55, 0, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00259ca10)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00259ca80)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", 
ContainerID:"docker://d43e140ffa4965550d9c7548a0958faab6e45140771652c42d3dc3a4f0f89125", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00c3ca160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00c3ca120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.6", ImageID:"", ContainerID:"", Started:(*bool)(0xc0049ab81f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:55:47.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-3069" for this suite. + +• [SLOW TEST:46.876 seconds] +[sig-node] InitContainer [NodeConformance] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":346,"completed":289,"skipped":5684,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:55:47.410: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on tmpfs +Aug 3 07:55:47.502: INFO: Waiting up to 5m0s for pod "pod-7aa3c735-8d36-4165-91d7-1dae2a8b8225" in namespace "emptydir-1348" to be "Succeeded or Failed" +Aug 3 07:55:47.510: INFO: Pod "pod-7aa3c735-8d36-4165-91d7-1dae2a8b8225": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.00765ms +Aug 3 07:55:49.519: INFO: Pod "pod-7aa3c735-8d36-4165-91d7-1dae2a8b8225": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017190899s +Aug 3 07:55:51.532: INFO: Pod "pod-7aa3c735-8d36-4165-91d7-1dae2a8b8225": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030666544s +STEP: Saw pod success +Aug 3 07:55:51.532: INFO: Pod "pod-7aa3c735-8d36-4165-91d7-1dae2a8b8225" satisfied condition "Succeeded or Failed" +Aug 3 07:55:51.538: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-7aa3c735-8d36-4165-91d7-1dae2a8b8225 container test-container: +STEP: delete the pod +Aug 3 07:55:51.605: INFO: Waiting for pod pod-7aa3c735-8d36-4165-91d7-1dae2a8b8225 to disappear +Aug 3 07:55:51.610: INFO: Pod pod-7aa3c735-8d36-4165-91d7-1dae2a8b8225 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:55:51.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1348" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":290,"skipped":5724,"failed":0} +SSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:55:51.630: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-9953 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Aug 3 07:55:51.729: INFO: Found 0 stateful pods, waiting for 3 +Aug 3 07:56:01.757: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:56:01.757: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:56:01.757: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false +Aug 3 07:56:11.747: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:56:11.747: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:56:11.747: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 +Aug 3 07:56:11.801: INFO: Updating 
stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Aug 3 07:56:21.870: INFO: Updating stateful set ss2 +Aug 3 07:56:21.882: INFO: Waiting for Pod statefulset-9953/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +STEP: Restoring Pods to the correct revision when they are deleted +Aug 3 07:56:32.029: INFO: Found 2 stateful pods, waiting for 3 +Aug 3 07:56:42.047: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:56:42.047: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 07:56:42.047: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Aug 3 07:56:42.089: INFO: Updating stateful set ss2 +Aug 3 07:56:42.100: INFO: Waiting for Pod statefulset-9953/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +Aug 3 07:56:52.153: INFO: Updating stateful set ss2 +Aug 3 07:56:52.167: INFO: Waiting for StatefulSet statefulset-9953/ss2 to complete update +Aug 3 07:56:52.167: INFO: Waiting for Pod statefulset-9953/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 3 07:57:02.190: INFO: Deleting all statefulset in ns statefulset-9953 +Aug 3 07:57:02.195: INFO: Scaling statefulset ss2 to 0 +Aug 3 07:57:12.243: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 3 07:57:12.251: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:57:12.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-9953" for this suite. 
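+
+The canary and phased behavior above is driven by the StatefulSet RollingUpdate partition: pods with an ordinal >= the partition receive the new revision, the rest keep the old one. A hedged sketch against a StatefulSet like ss2 follows; the container name `webserver` is an assumption, not read from the log.
+```
+# Canary: only ordinals >= 2 pick up the new template.
+kubectl patch statefulset ss2 -p \
+  '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
+kubectl set image statefulset/ss2 \
+  webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.39-2  # container name assumed
+
+# Phased rollout: lower the partition until it reaches 0.
+kubectl patch statefulset ss2 -p \
+  '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
+kubectl rollout status statefulset/ss2
+```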
+ +• [SLOW TEST:80.678 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":291,"skipped":5727,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:57:12.308: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:57:12.929: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +Aug 3 07:57:14.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 57, 12, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 57, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 57, 12, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 57, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-bb9577b7b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 07:57:16.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 57, 12, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 57, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 57, 12, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 57, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-bb9577b7b\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 07:57:19.995: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 07:57:20.005: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Creating a v1 custom resource +STEP: Create a v2 custom resource +STEP: List CRs in v1 +STEP: List CRs in v2 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:57:23.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-8066" for this suite. +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 + +• [SLOW TEST:11.099 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":292,"skipped":5729,"failed":0} +SSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:57:23.407: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1537 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 +Aug 3 07:57:23.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-8955 run e2e-test-httpd-pod --restart=Never 
--pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2' +Aug 3 07:57:23.773: INFO: stderr: "" +Aug 3 07:57:23.773: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created +[AfterEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 +Aug 3 07:57:23.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-8955 delete pods e2e-test-httpd-pod' +Aug 3 07:57:30.328: INFO: stderr: "" +Aug 3 07:57:30.328: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:57:30.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8955" for this suite. + +• [SLOW TEST:6.947 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1534 + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":346,"completed":293,"skipped":5735,"failed":0} +SS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:57:30.355: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-9c632e11-7a20-44c1-878f-efbe28078240 +STEP: Creating secret with name s-test-opt-upd-418ec3a2-c02c-4257-9f8c-68acd144389f +STEP: Creating the pod +Aug 3 07:57:30.464: INFO: The status of Pod pod-secrets-3e507ebc-e582-46f6-a663-7f1608667355 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:57:32.480: INFO: The status of Pod pod-secrets-3e507ebc-e582-46f6-a663-7f1608667355 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:57:34.480: INFO: The status of Pod pod-secrets-3e507ebc-e582-46f6-a663-7f1608667355 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:57:36.481: INFO: The status of Pod pod-secrets-3e507ebc-e582-46f6-a663-7f1608667355 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-9c632e11-7a20-44c1-878f-efbe28078240 +STEP: Updating secret s-test-opt-upd-418ec3a2-c02c-4257-9f8c-68acd144389f +STEP: Creating secret with name s-test-opt-create-01dacccd-caa0-4be2-a29c-5269cdf18286 +STEP: waiting to observe update in volume 
+[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:58:39.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6845" for this suite. + +• [SLOW TEST:68.883 seconds] +[sig-storage] Secrets +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":294,"skipped":5737,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:58:39.238: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-698 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Aug 3 07:58:39.302: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Aug 3 07:58:39.356: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:58:41.366: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 3 07:58:43.366: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:58:45.367: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:58:47.369: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:58:49.373: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:58:51.370: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:58:53.371: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:58:55.370: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:58:57.372: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:58:59.369: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 3 07:59:01.369: INFO: The status of Pod netserver-0 is Running (Ready = true) +Aug 3 07:59:01.382: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Aug 3 07:59:07.453: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Aug 3 07:59:07.453: INFO: Going to poll 172.29.31.90 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Aug 3 07:59:07.459: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 
--connect-timeout 1 http://172.29.31.90:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-698 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 07:59:07.459: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:59:07.460: INFO: ExecWithOptions: Clientset creation +Aug 3 07:59:07.461: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/pod-network-test-698/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.29.31.90%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +Aug 3 07:59:07.659: INFO: Found all 1 expected endpoints: [netserver-0] +Aug 3 07:59:07.659: INFO: Going to poll 172.29.175.13 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Aug 3 07:59:07.665: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.29.175.13:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-698 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 3 07:59:07.665: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +Aug 3 07:59:07.666: INFO: ExecWithOptions: Clientset creation +Aug 3 07:59:07.666: INFO: ExecWithOptions: execute(POST https://172.31.0.1:443/api/v1/namespaces/pod-network-test-698/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.29.175.13%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +Aug 3 07:59:07.836: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:59:07.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-698" for this suite. 
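+
+Stripped of the framework plumbing, the connectivity check in the ExecWithOptions lines above is a curl from the host-network test pod to each netserver pod's /hostName endpoint (the suite additionally pipes the output through grep to drop blank lines). Reproduced by hand with the pod IP and port from this run, which will differ on any other cluster:
+```
+kubectl exec -n pod-network-test-698 host-test-container-pod -- \
+  sh -c 'curl -g -q -s --max-time 15 --connect-timeout 1 http://172.29.31.90:8083/hostName'
+```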
+ +• [SLOW TEST:28.627 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":295,"skipped":5761,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:59:07.865: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 07:59:08.646: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 07:59:10.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 7, 59, 8, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 59, 8, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 7, 59, 8, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 7, 59, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 07:59:13.707: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API +STEP: create a configmap that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:59:13.824: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3689" for this suite. +STEP: Destroying namespace "webhook-3689-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:6.122 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":346,"completed":296,"skipped":5783,"failed":0} +SSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:59:13.987: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 07:59:14.250: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0bf3fb60-ab6d-493a-b8ea-ab1633342453" in namespace "projected-3744" to be "Succeeded or Failed" +Aug 3 07:59:14.264: INFO: Pod "downwardapi-volume-0bf3fb60-ab6d-493a-b8ea-ab1633342453": Phase="Pending", Reason="", readiness=false. Elapsed: 13.350343ms +Aug 3 07:59:16.289: INFO: Pod "downwardapi-volume-0bf3fb60-ab6d-493a-b8ea-ab1633342453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038329126s +Aug 3 07:59:18.303: INFO: Pod "downwardapi-volume-0bf3fb60-ab6d-493a-b8ea-ab1633342453": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053008951s +Aug 3 07:59:20.312: INFO: Pod "downwardapi-volume-0bf3fb60-ab6d-493a-b8ea-ab1633342453": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.06147537s +STEP: Saw pod success +Aug 3 07:59:20.312: INFO: Pod "downwardapi-volume-0bf3fb60-ab6d-493a-b8ea-ab1633342453" satisfied condition "Succeeded or Failed" +Aug 3 07:59:20.317: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-0bf3fb60-ab6d-493a-b8ea-ab1633342453 container client-container: +STEP: delete the pod +Aug 3 07:59:20.359: INFO: Waiting for pod downwardapi-volume-0bf3fb60-ab6d-493a-b8ea-ab1633342453 to disappear +Aug 3 07:59:20.364: INFO: Pod downwardapi-volume-0bf3fb60-ab6d-493a-b8ea-ab1633342453 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 07:59:20.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3744" for this suite. + +• [SLOW TEST:6.401 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":297,"skipped":5786,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 07:59:20.389: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:00:20.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-725" for this suite. 
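+
+The counterpart to the earlier liveness case: a readiness probe that always fails leaves the pod Running but never Ready, and, unlike a failing liveness probe, never triggers a restart. A minimal sketch with illustrative names:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: readiness-fail-demo
+spec:
+  containers:
+  - name: busybox
+    image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
+    command: ["sh", "-c", "sleep 600"]
+    readinessProbe:
+      exec:
+        command: ["/bin/false"]
+      periodSeconds: 5
+EOF
+
+# READY stays 0/1 and RESTARTS stays 0.
+kubectl get pod readiness-fail-demo
+```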
+ +• [SLOW TEST:60.112 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":346,"completed":298,"skipped":5809,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:00:20.502: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-7nvl +STEP: Creating a pod to test atomic-volume-subpath +Aug 3 08:00:20.600: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7nvl" in namespace "subpath-4504" to be "Succeeded or Failed" +Aug 3 08:00:20.612: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Pending", Reason="", readiness=false. Elapsed: 12.338637ms +Aug 3 08:00:22.630: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030579963s +Aug 3 08:00:24.637: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037637272s +Aug 3 08:00:26.652: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Running", Reason="", readiness=true. Elapsed: 6.052353969s +Aug 3 08:00:28.664: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Running", Reason="", readiness=true. Elapsed: 8.06465907s +Aug 3 08:00:30.678: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Running", Reason="", readiness=true. Elapsed: 10.078272227s +Aug 3 08:00:32.691: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Running", Reason="", readiness=true. Elapsed: 12.091317628s +Aug 3 08:00:34.699: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Running", Reason="", readiness=true. Elapsed: 14.099151091s +Aug 3 08:00:36.711: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Running", Reason="", readiness=true. Elapsed: 16.1114816s +Aug 3 08:00:38.723: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Running", Reason="", readiness=true. Elapsed: 18.123429473s +Aug 3 08:00:40.734: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.134306227s +Aug 3 08:00:42.750: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Running", Reason="", readiness=true. Elapsed: 22.150334273s +Aug 3 08:00:44.763: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Running", Reason="", readiness=true. Elapsed: 24.163453762s +Aug 3 08:00:46.785: INFO: Pod "pod-subpath-test-configmap-7nvl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.185396597s +STEP: Saw pod success +Aug 3 08:00:46.785: INFO: Pod "pod-subpath-test-configmap-7nvl" satisfied condition "Succeeded or Failed" +Aug 3 08:00:46.791: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-subpath-test-configmap-7nvl container test-container-subpath-configmap-7nvl: +STEP: delete the pod +Aug 3 08:00:46.853: INFO: Waiting for pod pod-subpath-test-configmap-7nvl to disappear +Aug 3 08:00:46.860: INFO: Pod pod-subpath-test-configmap-7nvl no longer exists +STEP: Deleting pod pod-subpath-test-configmap-7nvl +Aug 3 08:00:46.860: INFO: Deleting pod "pod-subpath-test-configmap-7nvl" in namespace "subpath-4504" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:00:46.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4504" for this suite. + +• [SLOW TEST:26.410 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":346,"completed":299,"skipped":5820,"failed":0} +SSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:00:46.912: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename taint-single-pod +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 +Aug 3 08:00:47.040: INFO: Waiting up to 1m0s for all nodes to be ready +Aug 3 08:01:47.125: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 08:01:47.130: INFO: Starting informer... +STEP: Starting pod... +Aug 3 08:01:47.364: INFO: Pod is running on dce-10-6-213-50. 
Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting short time to make sure Pod is queued for deletion +Aug 3 08:01:47.385: INFO: Pod wasn't evicted. Proceeding +Aug 3 08:01:47.385: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting some time to make sure that toleration time passed. +Aug 3 08:03:02.418: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:03:02.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-single-pod-498" for this suite. + +• [SLOW TEST:135.533 seconds] +[sig-node] NoExecuteTaintManager Single Pod [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":346,"completed":300,"skipped":5827,"failed":0} +SS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:03:02.446: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Aug 3 08:03:02.515: INFO: Pod name pod-release: Found 0 pods out of 1 +Aug 3 08:03:07.529: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:03:08.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-1079" for this suite. 
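+
+"Released" above means the ReplicationController orphans a pod once its labels stop matching the selector, then creates a replacement to restore the replica count. A rough hand-run equivalent; the `name=pod-release` selector is inferred from the pod name prefix in the log and should be treated as an assumption:
+```
+# Pick one pod currently owned by the RC (selector assumed).
+POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
+
+# Relabel it out of the selector; the RC releases it and spins up a replacement.
+kubectl label pod "$POD" name=not-pod-release --overwrite
+kubectl get pods -l name=pod-release
+```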
+ +• [SLOW TEST:6.193 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":301,"skipped":5829,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:03:08.639: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pods +Aug 3 08:03:08.712: INFO: created test-pod-1 +Aug 3 08:03:14.731: INFO: running and ready test-pod-1 +Aug 3 08:03:14.745: INFO: created test-pod-2 +Aug 3 08:03:18.764: INFO: running and ready test-pod-2 +Aug 3 08:03:18.774: INFO: created test-pod-3 +Aug 3 08:03:24.786: INFO: running and ready test-pod-3 +STEP: waiting for all 3 pods to be located +STEP: waiting for all pods to be deleted +Aug 3 08:03:24.849: INFO: Pod quantity 3 is different from expected quantity 0 +Aug 3 08:03:25.859: INFO: Pod quantity 3 is different from expected quantity 0 +Aug 3 08:03:26.863: INFO: Pod quantity 3 is different from expected quantity 0 +Aug 3 08:03:27.864: INFO: Pod quantity 3 is different from expected quantity 0 +Aug 3 08:03:28.862: INFO: Pod quantity 3 is different from expected quantity 0 +Aug 3 08:03:29.858: INFO: Pod quantity 1 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:03:30.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8459" for this suite. 
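+
+Deleting a pod collection, as exercised above, boils down to one label-selector delete. A sketch with names and a label of our own choosing (the suite drives the collection API directly; `kubectl delete -l` is the CLI equivalent):
+```
+# Create a few pods sharing a label.
+for i in 1 2 3; do
+  kubectl run "test-pod-$i" \
+    --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 \
+    --labels=demo=collection
+done
+
+# One call deletes the whole labeled collection.
+kubectl delete pods -l demo=collection
+```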
+ +• [SLOW TEST:22.234 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":346,"completed":302,"skipped":5836,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:03:30.873: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 08:03:30.962: INFO: Pod name sample-pod: Found 0 pods out of 1 +Aug 3 08:03:35.972: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Scaling up "test-rs" replicaset +Aug 3 08:03:35.983: INFO: Updating replica set "test-rs" +STEP: patching the ReplicaSet +Aug 3 08:03:35.996: INFO: observed ReplicaSet test-rs in namespace replicaset-6496 with ReadyReplicas 1, AvailableReplicas 1 +Aug 3 08:03:36.051: INFO: observed ReplicaSet test-rs in namespace replicaset-6496 with ReadyReplicas 1, AvailableReplicas 1 +Aug 3 08:03:36.071: INFO: observed ReplicaSet test-rs in namespace replicaset-6496 with ReadyReplicas 1, AvailableReplicas 1 +Aug 3 08:03:36.081: INFO: observed ReplicaSet test-rs in namespace replicaset-6496 with ReadyReplicas 1, AvailableReplicas 1 +Aug 3 08:03:39.311: INFO: observed ReplicaSet test-rs in namespace replicaset-6496 with ReadyReplicas 2, AvailableReplicas 2 +Aug 3 08:03:40.685: INFO: observed Replicaset test-rs in namespace replicaset-6496 with ReadyReplicas 3 found true +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:03:40.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-6496" for this suite. 
+ +• [SLOW TEST:9.833 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":346,"completed":303,"skipped":5847,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:03:40.707: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Aug 3 08:03:40.842: INFO: Waiting up to 5m0s for pod "pod-2f74b8e9-24b6-444e-863a-334d16b72fa8" in namespace "emptydir-9077" to be "Succeeded or Failed" +Aug 3 08:03:40.893: INFO: Pod "pod-2f74b8e9-24b6-444e-863a-334d16b72fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 50.737583ms +Aug 3 08:03:42.902: INFO: Pod "pod-2f74b8e9-24b6-444e-863a-334d16b72fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060491621s +Aug 3 08:03:44.912: INFO: Pod "pod-2f74b8e9-24b6-444e-863a-334d16b72fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070389908s +Aug 3 08:03:46.942: INFO: Pod "pod-2f74b8e9-24b6-444e-863a-334d16b72fa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100095858s +STEP: Saw pod success +Aug 3 08:03:46.942: INFO: Pod "pod-2f74b8e9-24b6-444e-863a-334d16b72fa8" satisfied condition "Succeeded or Failed" +Aug 3 08:03:46.975: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-2f74b8e9-24b6-444e-863a-334d16b72fa8 container test-container: +STEP: delete the pod +Aug 3 08:03:47.073: INFO: Waiting for pod pod-2f74b8e9-24b6-444e-863a-334d16b72fa8 to disappear +Aug 3 08:03:47.083: INFO: Pod pod-2f74b8e9-24b6-444e-863a-334d16b72fa8 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:03:47.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9077" for this suite. 
+ +• [SLOW TEST:6.470 seconds] +[sig-storage] EmptyDir volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":304,"skipped":5869,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:03:47.178: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Aug 3 08:03:47.409: INFO: Waiting up to 5m0s for pod "downward-api-743217f7-2ec0-4da1-82b6-d806eb4fb010" in namespace "downward-api-7574" to be "Succeeded or Failed" +Aug 3 08:03:47.418: INFO: Pod "downward-api-743217f7-2ec0-4da1-82b6-d806eb4fb010": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050383ms +Aug 3 08:03:49.432: INFO: Pod "downward-api-743217f7-2ec0-4da1-82b6-d806eb4fb010": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022883451s +Aug 3 08:03:51.444: INFO: Pod "downward-api-743217f7-2ec0-4da1-82b6-d806eb4fb010": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034675646s +Aug 3 08:03:53.456: INFO: Pod "downward-api-743217f7-2ec0-4da1-82b6-d806eb4fb010": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046739472s +STEP: Saw pod success +Aug 3 08:03:53.456: INFO: Pod "downward-api-743217f7-2ec0-4da1-82b6-d806eb4fb010" satisfied condition "Succeeded or Failed" +Aug 3 08:03:53.471: INFO: Trying to get logs from node dce-10-6-213-50 pod downward-api-743217f7-2ec0-4da1-82b6-d806eb4fb010 container dapi-container: +STEP: delete the pod +Aug 3 08:03:53.508: INFO: Waiting for pod downward-api-743217f7-2ec0-4da1-82b6-d806eb4fb010 to disappear +Aug 3 08:03:53.512: INFO: Pod downward-api-743217f7-2ec0-4da1-82b6-d806eb4fb010 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:03:53.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7574" for this suite. 
+ +• [SLOW TEST:6.353 seconds] +[sig-node] Downward API +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":305,"skipped":5917,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:03:53.532: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: delete a job +STEP: deleting Job.batch foo in namespace job-5374, will wait for the garbage collector to delete the pods +Aug 3 08:03:59.692: INFO: Deleting Job.batch foo took: 13.243194ms +Aug 3 08:03:59.793: INFO: Terminating Job.batch foo pods took: 101.011238ms +STEP: Ensuring job was deleted +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:04:32.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-5374" for this suite. 
+ +• [SLOW TEST:39.290 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":346,"completed":306,"skipped":5937,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:04:32.822: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pdb +STEP: Waiting for the pdb to be processed +STEP: updating the pdb +STEP: Waiting for the pdb to be processed +STEP: patching the pdb +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be deleted +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:04:39.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-7634" for this suite. 
+ +• [SLOW TEST:6.240 seconds] +[sig-apps] DisruptionController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":346,"completed":307,"skipped":5946,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:04:39.063: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Aug 3 08:04:39.628: INFO: Pod name wrapped-volume-race-44c1169f-0487-480e-bec2-9ac721974cf3: Found 0 pods out of 5 +Aug 3 08:04:44.642: INFO: Pod name wrapped-volume-race-44c1169f-0487-480e-bec2-9ac721974cf3: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-44c1169f-0487-480e-bec2-9ac721974cf3 in namespace emptydir-wrapper-2769, will wait for the garbage collector to delete the pods +Aug 3 08:04:56.795: INFO: Deleting ReplicationController wrapped-volume-race-44c1169f-0487-480e-bec2-9ac721974cf3 took: 36.699707ms +Aug 3 08:04:56.897: INFO: Terminating ReplicationController wrapped-volume-race-44c1169f-0487-480e-bec2-9ac721974cf3 pods took: 102.420257ms +STEP: Creating RC which spawns configmap-volume pods +Aug 3 08:05:02.731: INFO: Pod name wrapped-volume-race-9571725f-8107-4087-b332-0b4352ee6254: Found 0 pods out of 5 +Aug 3 08:05:07.751: INFO: Pod name wrapped-volume-race-9571725f-8107-4087-b332-0b4352ee6254: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-9571725f-8107-4087-b332-0b4352ee6254 in namespace emptydir-wrapper-2769, will wait for the garbage collector to delete the pods +Aug 3 08:05:21.890: INFO: Deleting ReplicationController wrapped-volume-race-9571725f-8107-4087-b332-0b4352ee6254 took: 18.342066ms +Aug 3 08:05:21.991: INFO: Terminating ReplicationController wrapped-volume-race-9571725f-8107-4087-b332-0b4352ee6254 pods took: 100.964682ms +STEP: Creating RC which spawns configmap-volume pods +Aug 3 08:05:27.240: INFO: Pod name wrapped-volume-race-bd4251b0-3468-4bfe-952c-3f9991cd6afc: Found 0 pods out of 5 +Aug 3 08:05:32.258: INFO: Pod name wrapped-volume-race-bd4251b0-3468-4bfe-952c-3f9991cd6afc: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-bd4251b0-3468-4bfe-952c-3f9991cd6afc in namespace emptydir-wrapper-2769, will wait for the garbage collector to delete the pods 
+Aug 3 08:05:44.388: INFO: Deleting ReplicationController wrapped-volume-race-bd4251b0-3468-4bfe-952c-3f9991cd6afc took: 31.523627ms +Aug 3 08:05:44.488: INFO: Terminating ReplicationController wrapped-volume-race-bd4251b0-3468-4bfe-952c-3f9991cd6afc pods took: 100.213454ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:05:50.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-2769" for this suite. + +• [SLOW TEST:71.450 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":308,"skipped":5981,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:05:50.514: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Aug 3 08:05:50.574: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Aug 3 08:05:50.594: INFO: Waiting for terminating namespaces to be deleted... 
+Aug 3 08:05:50.601: INFO: +Logging pods the apiserver thinks is on node dce-10-6-213-40 before test +Aug 3 08:05:50.621: INFO: dce-system-dnsservice-5fd54fd444-4b57d from dce-system started at 2022-08-03 03:54:34 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container dce-system-dnsservice ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: calico-node-ftbqq from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container calico-node ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: coredns-coredns-6b6c46d8b7-5dgzm from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container coredns ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: coredns-coredns-6b6c46d8b7-tb89f from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container coredns ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: dce-engine-htt6p from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container dce-engine ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: dce-kube-apiserver-proxy-dce-10-6-213-40 from kube-system started at 2022-08-01 07:26:27 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container dce-kube-apiserver-proxy ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: dce-parcel-agent-5xx9x from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container dce-parcel-agent ready: true, restart count 1 +Aug 3 08:05:50.621: INFO: dce-uds-host-driver-2w76c from kube-system started at 2022-08-02 09:36:09 +0000 UTC (2 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container dce-uds-csi-driver-prober ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: Container metrics-collector ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: dce-uds-policy-controller-6f4848f45d-8jhgc from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container dce-uds-policy-controller ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: dce-uds-snapshot-controller-7b76dc77c9-5tkg8 from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container snapshotter ready: true, restart count 2 +Aug 3 08:05:50.621: INFO: kube-proxy-fpf4g from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container kube-proxy ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: metrics-server-55db7974f8-2jq52 from kube-system started at 2022-08-02 09:40:49 +0000 UTC (2 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container metrics-server ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: Container metrics-server-nanny ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: node-local-dns-c7shk from kube-system started at 2022-08-02 07:46:48 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container node-cache ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: sonobuoy-systemd-logs-daemon-set-10147ad5bf5a4ba1-xplgl from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded) +Aug 3 08:05:50.621: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 3 08:05:50.621: INFO: Container systemd-logs ready: 
true, restart count 0 +Aug 3 08:05:50.621: INFO: +Logging pods the apiserver thinks is on node dce-10-6-213-50 before test +Aug 3 08:05:50.635: INFO: calico-node-s6xjf from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.635: INFO: Container calico-node ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: dce-engine-6d4wp from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.635: INFO: Container dce-engine ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: dce-kube-apiserver-proxy-dce-10-6-213-50 from kube-system started at 2022-08-01 07:26:33 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.635: INFO: Container dce-kube-apiserver-proxy ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: dce-parcel-agent-t4d24 from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.635: INFO: Container dce-parcel-agent ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: dce-uds-host-driver-nqcxc from kube-system started at 2022-08-02 09:40:52 +0000 UTC (2 container statuses recorded) +Aug 3 08:05:50.635: INFO: Container dce-uds-csi-driver-prober ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: Container metrics-collector ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: kube-proxy-j6g24 from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.635: INFO: Container kube-proxy ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: node-local-dns-dqpd9 from kube-system started at 2022-08-03 08:02:18 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.635: INFO: Container node-cache ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: sonobuoy from sonobuoy started at 2022-08-03 06:16:12 +0000 UTC (1 container statuses recorded) +Aug 3 08:05:50.635: INFO: Container kube-sonobuoy ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: sonobuoy-e2e-job-eb6a0f3fa9794033 from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded) +Aug 3 08:05:50.635: INFO: Container e2e ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: sonobuoy-systemd-logs-daemon-set-10147ad5bf5a4ba1-gxfgs from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded) +Aug 3 08:05:50.635: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 3 08:05:50.635: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-03f2981b-921c-44ca-8dcb-8784b031bc3c 42 +STEP: Trying to relaunch the pod, now with labels. 
+STEP: removing the label kubernetes.io/e2e-03f2981b-921c-44ca-8dcb-8784b031bc3c off the node dce-10-6-213-50 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-03f2981b-921c-44ca-8dcb-8784b031bc3c +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:06:00.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-5957" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 + +• [SLOW TEST:10.333 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":346,"completed":309,"skipped":6016,"failed":0} +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:06:00.847: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 08:06:00.940: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Aug 3 08:06:05.952: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Aug 3 08:06:05.952: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Aug 3 08:06:07.967: INFO: Creating deployment "test-rollover-deployment" +Aug 3 08:06:07.981: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Aug 3 08:06:09.995: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Aug 3 08:06:10.009: INFO: Ensure that both replica sets have 1 created replica +Aug 3 08:06:10.023: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Aug 3 08:06:10.039: INFO: Updating deployment test-rollover-deployment +Aug 3 08:06:10.039: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Aug 3 08:06:12.063: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Aug 3 08:06:12.080: INFO: Make sure deployment "test-rollover-deployment" is complete +Aug 3 08:06:12.094: INFO: all replica sets need to contain the pod-template-hash label +Aug 3 08:06:12.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, 
ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 7, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 08:06:14.113: INFO: all replica sets need to contain the pod-template-hash label +Aug 3 08:06:14.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 7, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 08:06:16.115: INFO: all replica sets need to contain the pod-template-hash label +Aug 3 08:06:16.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 7, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 08:06:18.112: INFO: all replica sets need to contain the pod-template-hash label +Aug 3 08:06:18.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 7, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 
08:06:20.110: INFO: all replica sets need to contain the pod-template-hash label +Aug 3 08:06:20.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 7, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 08:06:22.117: INFO: all replica sets need to contain the pod-template-hash label +Aug 3 08:06:22.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 7, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 08:06:24.109: INFO: all replica sets need to contain the pod-template-hash label +Aug 3 08:06:24.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 6, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 6, 7, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 08:06:26.117: INFO: +Aug 3 08:06:26.117: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 3 08:06:26.137: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-5258 232a94ed-5bb3-4648-9290-8cb5f7f7ccc2 643804 2 2022-08-03 08:06:07 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b9ae48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-08-03 08:06:08 +0000 UTC,LastTransitionTime:2022-08-03 08:06:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668b7f667d" has successfully progressed.,LastUpdateTime:2022-08-03 08:06:24 +0000 UTC,LastTransitionTime:2022-08-03 08:06:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Aug 3 08:06:26.147: INFO: New ReplicaSet "test-rollover-deployment-668b7f667d" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-668b7f667d deployment-5258 90dbe337-29a3-4ee4-8e9a-6c02c402c6d7 643793 2 2022-08-03 08:06:10 +0000 UTC map[name:rollover-pod pod-template-hash:668b7f667d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 232a94ed-5bb3-4648-9290-8cb5f7f7ccc2 0xc004b9b2b7 0xc004b9b2b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668b7f667d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:668b7f667d] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b9b328 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Aug 3 08:06:26.147: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Aug 3 08:06:26.148: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5258 
9baaf414-41d0-4f07-b2cb-1518eda63e34 643802 2 2022-08-03 08:06:00 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 232a94ed-5bb3-4648-9290-8cb5f7f7ccc2 0xc004b9b1e7 0xc004b9b1e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004b9b248 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 3 08:06:26.148: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-784bc44b77 deployment-5258 e2a41bcd-5457-402a-8c8d-ee2ab1272d64 643722 2 2022-08-03 08:06:07 +0000 UTC map[name:rollover-pod pod-template-hash:784bc44b77] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 232a94ed-5bb3-4648-9290-8cb5f7f7ccc2 0xc004b9b397 0xc004b9b398}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 784bc44b77,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:784bc44b77] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b9b408 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 3 08:06:26.157: INFO: Pod "test-rollover-deployment-668b7f667d-bpcnh" is available: +&Pod{ObjectMeta:{test-rollover-deployment-668b7f667d-bpcnh test-rollover-deployment-668b7f667d- deployment-5258 6d67cbcb-3cb7-435e-a802-30d467aeaf4d 643760 0 2022-08-03 08:06:10 +0000 UTC map[name:rollover-pod pod-template-hash:668b7f667d] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet test-rollover-deployment-668b7f667d 90dbe337-29a3-4ee4-8e9a-6c02c402c6d7 0xc0045af117 
0xc0045af118}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pnl5r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnl5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondit
ion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 08:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 08:06:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 08:06:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 08:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:172.29.175.43,StartTime:2022-08-03 08:06:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 08:06:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:docker://c9c732660142e702697f361d0208c69152c9b43eb086a5b4d822462f4bd8a057,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.175.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:06:26.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-5258" for this suite. + +• [SLOW TEST:25.332 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":310,"skipped":6016,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:06:26.180: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 +STEP: create the container to handle the HTTPGet hook request. 
+Aug 3 08:06:26.275: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:06:28.321: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:06:30.284: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Aug 3 08:06:30.305: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:06:32.318: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:06:34.314: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Aug 3 08:06:34.380: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Aug 3 08:06:34.386: INFO: Pod pod-with-poststart-http-hook still exists +Aug 3 08:06:36.386: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Aug 3 08:06:36.400: INFO: Pod pod-with-poststart-http-hook still exists +Aug 3 08:06:38.388: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Aug 3 08:06:38.403: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:06:38.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-2986" for this suite. 
+ +• [SLOW TEST:12.244 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":346,"completed":311,"skipped":6028,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:06:38.425: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Aug 3 08:06:38.491: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Aug 3 08:06:38.514: INFO: Waiting for terminating namespaces to be deleted... 
+Aug 3 08:06:38.521: INFO: +Logging pods the apiserver thinks is on node dce-10-6-213-40 before test +Aug 3 08:06:38.534: INFO: pod-handle-http-request from container-lifecycle-hook-2986 started at 2022-08-03 08:06:26 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container agnhost-container ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: dce-system-dnsservice-5fd54fd444-4b57d from dce-system started at 2022-08-03 03:54:34 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container dce-system-dnsservice ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: calico-node-ftbqq from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container calico-node ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: coredns-coredns-6b6c46d8b7-5dgzm from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container coredns ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: coredns-coredns-6b6c46d8b7-tb89f from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container coredns ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: dce-engine-htt6p from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container dce-engine ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: dce-kube-apiserver-proxy-dce-10-6-213-40 from kube-system started at 2022-08-01 07:26:27 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container dce-kube-apiserver-proxy ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: dce-parcel-agent-5xx9x from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container dce-parcel-agent ready: true, restart count 1 +Aug 3 08:06:38.534: INFO: dce-uds-host-driver-2w76c from kube-system started at 2022-08-02 09:36:09 +0000 UTC (2 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container dce-uds-csi-driver-prober ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: Container metrics-collector ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: dce-uds-policy-controller-6f4848f45d-8jhgc from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container dce-uds-policy-controller ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: dce-uds-snapshot-controller-7b76dc77c9-5tkg8 from kube-system started at 2022-08-02 09:40:48 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container snapshotter ready: true, restart count 2 +Aug 3 08:06:38.534: INFO: kube-proxy-fpf4g from kube-system started at 2022-08-01 07:26:41 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container kube-proxy ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: metrics-server-55db7974f8-2jq52 from kube-system started at 2022-08-02 09:40:49 +0000 UTC (2 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container metrics-server ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: Container metrics-server-nanny ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: node-local-dns-c7shk from kube-system started at 2022-08-02 07:46:48 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container node-cache ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: 
sonobuoy-systemd-logs-daemon-set-10147ad5bf5a4ba1-xplgl from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded) +Aug 3 08:06:38.534: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: Container systemd-logs ready: true, restart count 0 +Aug 3 08:06:38.534: INFO: +Logging pods the apiserver thinks is on node dce-10-6-213-50 before test +Aug 3 08:06:38.550: INFO: calico-node-s6xjf from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.550: INFO: Container calico-node ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: dce-engine-6d4wp from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.550: INFO: Container dce-engine ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: dce-kube-apiserver-proxy-dce-10-6-213-50 from kube-system started at 2022-08-01 07:26:33 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.550: INFO: Container dce-kube-apiserver-proxy ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: dce-parcel-agent-t4d24 from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.550: INFO: Container dce-parcel-agent ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: dce-uds-host-driver-nqcxc from kube-system started at 2022-08-02 09:40:52 +0000 UTC (2 container statuses recorded) +Aug 3 08:06:38.550: INFO: Container dce-uds-csi-driver-prober ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: Container metrics-collector ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: kube-proxy-j6g24 from kube-system started at 2022-08-01 07:26:47 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.550: INFO: Container kube-proxy ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: node-local-dns-dqpd9 from kube-system started at 2022-08-03 08:02:18 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.550: INFO: Container node-cache ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: sonobuoy from sonobuoy started at 2022-08-03 06:16:12 +0000 UTC (1 container statuses recorded) +Aug 3 08:06:38.550: INFO: Container kube-sonobuoy ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: sonobuoy-e2e-job-eb6a0f3fa9794033 from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded) +Aug 3 08:06:38.550: INFO: Container e2e ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: sonobuoy-systemd-logs-daemon-set-10147ad5bf5a4ba1-gxfgs from sonobuoy started at 2022-08-03 06:16:15 +0000 UTC (2 container statuses recorded) +Aug 3 08:06:38.550: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 3 08:06:38.550: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to schedule Pod with nonempty NodeSelector. +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.1707c73071c93e30], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
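The FailedScheduling event above is the test's expected outcome: a pod whose nodeSelector matches no node label stays Pending, with the three master nodes additionally excluded by their taint. A minimal sketch that reproduces this outside the harness; the label key/value are hypothetical and deliberately match nothing:
```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example.io/nonexistent: "true"   # hypothetical label; no node carries it
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.6
EOF
kubectl describe pod restricted-pod  # Events: FailedScheduling ... didn't match Pod's node affinity/selector
```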
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:06:39.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-7477" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":346,"completed":312,"skipped":6050,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:06:39.640: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1331 +STEP: creating the pod +Aug 3 08:06:39.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7593 create -f -' +Aug 3 08:06:41.092: INFO: stderr: "" +Aug 3 08:06:41.092: INFO: stdout: "pod/pause created\n" +Aug 3 08:06:41.092: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Aug 3 08:06:41.092: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7593" to be "running and ready" +Aug 3 08:06:41.097: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.239857ms +Aug 3 08:06:43.109: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017271219s +Aug 3 08:06:45.123: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.031222769s +Aug 3 08:06:45.123: INFO: Pod "pause" satisfied condition "running and ready" +Aug 3 08:06:45.123: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: adding the label testing-label with value testing-label-value to a pod +Aug 3 08:06:45.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7593 label pods pause testing-label=testing-label-value' +Aug 3 08:06:45.302: INFO: stderr: "" +Aug 3 08:06:45.303: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Aug 3 08:06:45.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7593 get pod pause -L testing-label' +Aug 3 08:06:45.418: INFO: stderr: "" +Aug 3 08:06:45.418: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" +STEP: removing the label testing-label of a pod +Aug 3 08:06:45.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7593 label pods pause testing-label-' +Aug 3 08:06:45.562: INFO: stderr: "" +Aug 3 08:06:45.562: INFO: stdout: "pod/pause unlabeled\n" +STEP: verifying the pod doesn't have the label testing-label +Aug 3 08:06:45.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7593 get pod pause -L testing-label' +Aug 3 08:06:45.695: INFO: stderr: "" +Aug 3 08:06:45.695: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" +[AfterEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1337 +STEP: using delete to clean up resources +Aug 3 08:06:45.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7593 delete --grace-period=0 --force -f -' +Aug 3 08:06:45.843: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 3 08:06:45.843: INFO: stdout: "pod \"pause\" force deleted\n" +Aug 3 08:06:45.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7593 get rc,svc -l name=pause --no-headers' +Aug 3 08:06:45.968: INFO: stderr: "No resources found in kubectl-7593 namespace.\n" +Aug 3 08:06:45.968: INFO: stdout: "" +Aug 3 08:06:45.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=kubectl-7593 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Aug 3 08:06:46.087: INFO: stderr: "" +Aug 3 08:06:46.087: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:06:46.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7593" for this suite. 
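The label round-trip above boils down to three kubectl invocations; these are the same commands the harness ran, minus the kubeconfig flag (namespace and pod name taken from the log):
```
kubectl -n kubectl-7593 label pods pause testing-label=testing-label-value
kubectl -n kubectl-7593 get pod pause -L testing-label    # -L adds a TESTING-LABEL column
kubectl -n kubectl-7593 label pods pause testing-label-   # trailing '-' removes the label
```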
+ +• [SLOW TEST:6.469 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1329 + should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":346,"completed":313,"skipped":6065,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:06:46.111: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 08:06:46.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fbc91bc-71ed-4840-92de-913b0bc3812f" in namespace "downward-api-9571" to be "Succeeded or Failed" +Aug 3 08:06:46.213: INFO: Pod "downwardapi-volume-3fbc91bc-71ed-4840-92de-913b0bc3812f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.802758ms +Aug 3 08:06:48.225: INFO: Pod "downwardapi-volume-3fbc91bc-71ed-4840-92de-913b0bc3812f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0191241s +Aug 3 08:06:50.233: INFO: Pod "downwardapi-volume-3fbc91bc-71ed-4840-92de-913b0bc3812f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026634202s +STEP: Saw pod success +Aug 3 08:06:50.233: INFO: Pod "downwardapi-volume-3fbc91bc-71ed-4840-92de-913b0bc3812f" satisfied condition "Succeeded or Failed" +Aug 3 08:06:50.238: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-3fbc91bc-71ed-4840-92de-913b0bc3812f container client-container: +STEP: delete the pod +Aug 3 08:06:50.300: INFO: Waiting for pod downwardapi-volume-3fbc91bc-71ed-4840-92de-913b0bc3812f to disappear +Aug 3 08:06:50.304: INFO: Pod downwardapi-volume-3fbc91bc-71ed-4840-92de-913b0bc3812f no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:06:50.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9571" for this suite. 
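The downward API test above mounts a volume file exposing limits.cpu; because the container sets no CPU limit, the kubelet falls back to the node's allocatable CPU, which is what the pod prints before exiting. A sketch of such a pod, assuming the agnhost mounttest helper used throughout this suite (the pod name is hypothetical):
```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.33
    args: ["mounttest", "--file_content=/etc/podinfo/cpu_limit"]
    # no resources.limits set, so limits.cpu resolves to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
```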
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":314,"skipped":6120,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:06:50.320: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 08:06:50.382: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:06:50.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-4972" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":346,"completed":315,"skipped":6183,"failed":0} +SSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:06:50.979: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Aug 3 08:06:51.073: INFO: Waiting up to 1m0s for all nodes to be ready +Aug 3 08:07:51.151: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. 
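Before the log continues with the pod placements, a sketch of the user-defined priority tiers such a preemption test relies on; the names and values here are illustrative, not the ones the suite creates. The "critical" pod itself runs in kube-system with one of the built-in system-* priority classes, which outrank any user-defined class (user values must stay below 1000000000):
```
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority          # hypothetical
value: 1000
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: medium-priority       # hypothetical
value: 10000
EOF
```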
+Aug 3 08:07:51.204: INFO: Created pod: pod0-0-sched-preemption-low-priority +Aug 3 08:07:51.211: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Aug 3 08:07:51.231: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Aug 3 08:07:51.244: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a critical pod that use same resources as that of a lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:08:11.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-7565" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:80.886 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":316,"skipped":6187,"failed":0} +SSS +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:08:11.865: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 08:08:11.994: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1305ced7-8683-468e-b8fa-a93be9e5781d" in namespace "security-context-test-999" to be "Succeeded or Failed" +Aug 3 08:08:12.004: INFO: Pod "busybox-privileged-false-1305ced7-8683-468e-b8fa-a93be9e5781d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.224421ms +Aug 3 08:08:14.018: INFO: Pod "busybox-privileged-false-1305ced7-8683-468e-b8fa-a93be9e5781d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023342772s +Aug 3 08:08:16.030: INFO: Pod "busybox-privileged-false-1305ced7-8683-468e-b8fa-a93be9e5781d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03596364s +Aug 3 08:08:18.045: INFO: Pod "busybox-privileged-false-1305ced7-8683-468e-b8fa-a93be9e5781d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.050899939s +Aug 3 08:08:18.045: INFO: Pod "busybox-privileged-false-1305ced7-8683-468e-b8fa-a93be9e5781d" satisfied condition "Succeeded or Failed" +Aug 3 08:08:18.059: INFO: Got logs for pod "busybox-privileged-false-1305ced7-8683-468e-b8fa-a93be9e5781d": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:08:18.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-999" for this suite. + +• [SLOW TEST:6.225 seconds] +[sig-node] Security Context +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + When creating a pod with privileged + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":317,"skipped":6190,"failed":0} +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:08:18.090: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 08:08:18.192: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b99acdde-b71d-4cd8-bc2b-b8dc07f137fe" in namespace "downward-api-1457" to be "Succeeded or Failed" +Aug 3 08:08:18.203: INFO: Pod "downwardapi-volume-b99acdde-b71d-4cd8-bc2b-b8dc07f137fe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.23614ms +Aug 3 08:08:20.213: INFO: Pod "downwardapi-volume-b99acdde-b71d-4cd8-bc2b-b8dc07f137fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020682165s +Aug 3 08:08:22.226: INFO: Pod "downwardapi-volume-b99acdde-b71d-4cd8-bc2b-b8dc07f137fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034060765s +STEP: Saw pod success +Aug 3 08:08:22.226: INFO: Pod "downwardapi-volume-b99acdde-b71d-4cd8-bc2b-b8dc07f137fe" satisfied condition "Succeeded or Failed" +Aug 3 08:08:22.233: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-b99acdde-b71d-4cd8-bc2b-b8dc07f137fe container client-container: +STEP: delete the pod +Aug 3 08:08:22.278: INFO: Waiting for pod downwardapi-volume-b99acdde-b71d-4cd8-bc2b-b8dc07f137fe to disappear +Aug 3 08:08:22.284: INFO: Pod downwardapi-volume-b99acdde-b71d-4cd8-bc2b-b8dc07f137fe no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:08:22.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1457" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":318,"skipped":6190,"failed":0} +SSSSSS +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:08:22.304: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename podtemplate +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pod templates +Aug 3 08:08:22.416: INFO: created test-podtemplate-1 +Aug 3 08:08:22.423: INFO: created test-podtemplate-2 +Aug 3 08:08:22.431: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace +STEP: delete collection of pod templates +Aug 3 08:08:22.441: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity +Aug 3 08:08:22.461: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:08:22.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-7960" for this suite. 
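The PodTemplate test above creates three labeled templates and removes them in a single DeleteCollection request, which kubectl exposes as a label-selector delete. A sketch under the same assumptions (the label key/value are hypothetical):
```
for i in 1 2 3; do
kubectl create -f - <<EOF
apiVersion: v1
kind: PodTemplate
metadata:
  name: test-podtemplate-$i
  labels:
    podtemplate-set: test        # hypothetical label
template:
  metadata:
    labels:
      name: test-podtemplate-$i
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.6
EOF
done
kubectl delete podtemplates -l podtemplate-set=test   # one DeleteCollection call
```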
+•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":319,"skipped":6196,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:08:22.488: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 08:08:22.635: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. +Aug 3 08:08:22.648: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:22.648: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:22.648: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:22.658: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 08:08:22.658: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 08:08:23.676: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:23.676: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:23.677: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:23.687: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 08:08:23.687: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 08:08:24.670: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:24.670: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:24.670: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:24.678: INFO: Number of nodes with available pods controlled by 
daemonset daemon-set: 0 +Aug 3 08:08:24.678: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 08:08:25.671: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:25.671: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:25.671: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:25.676: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 08:08:25.677: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 08:08:26.670: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:26.670: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:26.670: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:26.677: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 08:08:26.677: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 08:08:27.676: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:27.676: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:27.676: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:27.683: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 3 08:08:27.683: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Aug 3 08:08:27.756: INFO: Wrong image for pod: daemon-set-4xdnb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Aug 3 08:08:27.756: INFO: Wrong image for pod: daemon-set-zdhjh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Aug 3 08:08:27.767: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:27.768: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:27.768: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:28.779: INFO: Wrong image for pod: daemon-set-4xdnb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. 
+Aug 3 08:08:28.788: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:28.788: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:28.788: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:29.777: INFO: Wrong image for pod: daemon-set-4xdnb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Aug 3 08:08:29.787: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:29.787: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:29.787: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:30.780: INFO: Wrong image for pod: daemon-set-4xdnb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Aug 3 08:08:30.789: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:30.789: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:30.789: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:31.780: INFO: Wrong image for pod: daemon-set-4xdnb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Aug 3 08:08:31.787: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:31.787: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:31.787: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:32.776: INFO: Wrong image for pod: daemon-set-4xdnb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. 
+Aug 3 08:08:32.776: INFO: Pod daemon-set-j7xlj is not available +Aug 3 08:08:32.784: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:32.784: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:32.784: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:33.777: INFO: Wrong image for pod: daemon-set-4xdnb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Aug 3 08:08:33.778: INFO: Pod daemon-set-j7xlj is not available +Aug 3 08:08:33.794: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:33.794: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:33.794: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:34.775: INFO: Wrong image for pod: daemon-set-4xdnb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Aug 3 08:08:34.775: INFO: Pod daemon-set-j7xlj is not available +Aug 3 08:08:34.782: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:34.782: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:34.782: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:35.789: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:35.790: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:35.790: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:36.805: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:36.805: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:36.805: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:37.789: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node +Aug 3 08:08:37.790: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:37.790: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:38.780: INFO: Pod daemon-set-9xwmf is not available +Aug 3 08:08:38.788: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:38.788: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:38.788: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +STEP: Check that daemon pods are still running on every node of the cluster. +Aug 3 08:08:38.798: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:38.798: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:38.798: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:38.803: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 08:08:38.803: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 08:08:39.820: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:39.821: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:39.821: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:39.826: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 08:08:39.826: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 08:08:40.816: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:40.816: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:40.816: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:40.821: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 08:08:40.821: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 08:08:41.837: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this 
node +Aug 3 08:08:41.837: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:41.837: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:41.841: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 3 08:08:41.841: INFO: Node dce-10-6-213-50 is running 0 daemon pod, expected 1 +Aug 3 08:08:42.818: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:42.819: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:42.819: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:08:42.827: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 3 08:08:42.827: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2436, will wait for the garbage collector to delete the pods +Aug 3 08:08:42.930: INFO: Deleting DaemonSet.extensions daemon-set took: 15.333231ms +Aug 3 08:08:43.030: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.570219ms +Aug 3 08:08:47.244: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 08:08:47.244: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 3 08:08:47.284: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"644852"},"items":null} + +Aug 3 08:08:47.290: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"644852"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:08:47.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-2436" for this suite. 
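The long sequence above is a standard RollingUpdate: the DaemonSet's image changes from httpd:2.4.38-2 to agnhost:2.33 and each daemon pod is replaced in turn, with the three tainted masters skipped throughout. A minimal reproduction under the same assumptions (no master tolerations; the container name is hypothetical):
```
kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate          # the default, shown explicitly
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                # hypothetical container name
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
EOF
# Changing the image triggers the pod-by-pod replacement seen in the log:
kubectl set image daemonset/daemon-set app=k8s.gcr.io/e2e-test-images/agnhost:2.33
kubectl rollout status daemonset/daemon-set
```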
+ +• [SLOW TEST:24.856 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":346,"completed":320,"skipped":6262,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:08:47.345: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Aug 3 08:08:47.432: INFO: Waiting up to 5m0s for pod "pod-da28bd12-07a8-48cc-aaec-9ded17378728" in namespace "emptydir-4221" to be "Succeeded or Failed" +Aug 3 08:08:47.439: INFO: Pod "pod-da28bd12-07a8-48cc-aaec-9ded17378728": Phase="Pending", Reason="", readiness=false. Elapsed: 6.646908ms +Aug 3 08:08:49.455: INFO: Pod "pod-da28bd12-07a8-48cc-aaec-9ded17378728": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023116613s +Aug 3 08:08:51.467: INFO: Pod "pod-da28bd12-07a8-48cc-aaec-9ded17378728": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034937652s +Aug 3 08:08:53.487: INFO: Pod "pod-da28bd12-07a8-48cc-aaec-9ded17378728": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05526359s +STEP: Saw pod success +Aug 3 08:08:53.488: INFO: Pod "pod-da28bd12-07a8-48cc-aaec-9ded17378728" satisfied condition "Succeeded or Failed" +Aug 3 08:08:53.494: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-da28bd12-07a8-48cc-aaec-9ded17378728 container test-container: +STEP: delete the pod +Aug 3 08:08:53.526: INFO: Waiting for pod pod-da28bd12-07a8-48cc-aaec-9ded17378728 to disappear +Aug 3 08:08:53.529: INFO: Pod pod-da28bd12-07a8-48cc-aaec-9ded17378728 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:08:53.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4221" for this suite. 
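The emptyDir test above writes a root-owned 0644 file on the default medium (node disk) and verifies its mode and content from inside the container. A sketch using the same agnhost mounttest helper (the pod name is hypothetical; the flags are the checks this image provides):
```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.33
    args: ["mounttest",
           "--fs_type=/test-volume",
           "--new_file_0644=/test-volume/test-file",
           "--file_perm=/test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium: backed by node storage
EOF
```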
+ +• [SLOW TEST:6.205 seconds] +[sig-storage] EmptyDir volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":321,"skipped":6268,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:08:53.551: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-7916 +STEP: creating service affinity-clusterip-transition in namespace services-7916 +STEP: creating replication controller affinity-clusterip-transition in namespace services-7916 +I0803 08:08:53.630067 21 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-7916, replica count: 3 +I0803 08:08:56.681317 21 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0803 08:08:59.682883 21 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 08:08:59.696: INFO: Creating new exec pod +Aug 3 08:09:04.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7916 exec execpod-affinitylpxfg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' +Aug 3 08:09:05.226: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Aug 3 08:09:05.226: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 08:09:05.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7916 exec execpod-affinitylpxfg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.29.240 80' +Aug 3 08:09:05.558: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.29.240 80\nConnection to 172.31.29.240 80 port [tcp/http] succeeded!\n" +Aug 3 08:09:05.558: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; 
charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 3 08:09:05.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7916 exec execpod-affinitylpxfg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.31.29.240:80/ ; done' +Aug 3 08:09:06.006: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n" +Aug 3 08:09:06.006: INFO: stdout: "\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch" +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:06.006: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 
08:09:36.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7916 exec execpod-affinitylpxfg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.31.29.240:80/ ; done' +Aug 3 08:09:36.357: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n" +Aug 3 08:09:36.357: INFO: stdout: "\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-222c5\naffinity-clusterip-transition-222c5\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-xvpch\naffinity-clusterip-transition-222c5\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-222c5\naffinity-clusterip-transition-vhcfg" +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-222c5 +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-222c5 +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-xvpch +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-222c5 +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-222c5 +Aug 3 08:09:36.357: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.381: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-7916 exec execpod-affinitylpxfg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.31.29.240:80/ ; done' +Aug 3 08:09:36.821: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.29.240:80/\n" +Aug 3 08:09:36.821: INFO: stdout: "\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg\naffinity-clusterip-transition-vhcfg" +Aug 3 08:09:36.821: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.821: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.821: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Received response from host: affinity-clusterip-transition-vhcfg +Aug 3 08:09:36.822: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in 
namespace services-7916, will wait for the garbage collector to delete the pods +Aug 3 08:09:36.909: INFO: Deleting ReplicationController affinity-clusterip-transition took: 11.549877ms +Aug 3 08:09:37.011: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 101.29126ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:09:41.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7916" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:47.505 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":322,"skipped":6347,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:09:41.058: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 08:09:41.741: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 08:09:43.770: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 8, 9, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 9, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 9, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 9, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 08:09:46.806: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting timeout (1s) shorter than webhook latency (5s) +STEP: Registering slow webhook via the AdmissionRegistration API +Aug 3 08:09:56.979: INFO: Waiting for webhook configuration to be ready... +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is longer than webhook latency +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is empty (defaulted to 10s in v1) +STEP: Registering slow webhook via the AdmissionRegistration API +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:10:09.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9593" for this suite. +STEP: Destroying namespace "webhook-9593-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:28.304 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":346,"completed":323,"skipped":6364,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:10:09.363: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-9588/configmap-test-8befec2b-55b3-4621-b2d5-b9a12e09cdef +STEP: Creating a pod to test consume configMaps +Aug 3 08:10:09.449: INFO: Waiting up to 5m0s for pod "pod-configmaps-a51abbc1-6e00-4c22-8a0b-4e4fcf716f98" in namespace "configmap-9588" to be "Succeeded or Failed" +Aug 3 08:10:09.454: INFO: Pod "pod-configmaps-a51abbc1-6e00-4c22-8a0b-4e4fcf716f98": Phase="Pending", Reason="", readiness=false. Elapsed: 5.573794ms +Aug 3 08:10:11.467: INFO: Pod "pod-configmaps-a51abbc1-6e00-4c22-8a0b-4e4fcf716f98": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01835056s +Aug 3 08:10:13.481: INFO: Pod "pod-configmaps-a51abbc1-6e00-4c22-8a0b-4e4fcf716f98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03206309s +STEP: Saw pod success +Aug 3 08:10:13.481: INFO: Pod "pod-configmaps-a51abbc1-6e00-4c22-8a0b-4e4fcf716f98" satisfied condition "Succeeded or Failed" +Aug 3 08:10:13.486: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-configmaps-a51abbc1-6e00-4c22-8a0b-4e4fcf716f98 container env-test: +STEP: delete the pod +Aug 3 08:10:13.526: INFO: Waiting for pod pod-configmaps-a51abbc1-6e00-4c22-8a0b-4e4fcf716f98 to disappear +Aug 3 08:10:13.530: INFO: Pod pod-configmaps-a51abbc1-6e00-4c22-8a0b-4e4fcf716f98 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:10:13.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9588" for this suite. +•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":324,"skipped":6375,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:10:13.550: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-5653 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating stateful set ss in namespace statefulset-5653 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5653 +Aug 3 08:10:13.686: INFO: Found 0 stateful pods, waiting for 1 +Aug 3 08:10:23.705: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Aug 3 08:10:23.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-5653 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 3 08:10:23.998: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 3 08:10:23.998: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 3 08:10:23.998: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 3 08:10:24.004: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - 
Ready=true +Aug 3 08:10:34.026: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Aug 3 08:10:34.026: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 3 08:10:34.043: INFO: POD NODE PHASE GRACE CONDITIONS +Aug 3 08:10:34.043: INFO: ss-0 dce-10-6-213-50 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:13 +0000 UTC }] +Aug 3 08:10:34.043: INFO: +Aug 3 08:10:34.043: INFO: StatefulSet ss has not reached scale 3, at 1 +Aug 3 08:10:35.051: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996263506s +Aug 3 08:10:36.062: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98717174s +Aug 3 08:10:37.070: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.977776354s +Aug 3 08:10:38.082: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.969720314s +Aug 3 08:10:39.103: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.957312536s +Aug 3 08:10:40.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.935899542s +Aug 3 08:10:41.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.919342469s +Aug 3 08:10:42.143: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.907886797s +Aug 3 08:10:43.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 896.130375ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5653 +Aug 3 08:10:44.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-5653 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 3 08:10:44.443: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 3 08:10:44.443: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 3 08:10:44.443: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 3 08:10:44.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-5653 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 3 08:10:44.782: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Aug 3 08:10:44.782: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 3 08:10:44.782: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 3 08:10:44.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-5653 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 3 08:10:45.140: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Aug 3 08:10:45.140: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 3 
08:10:45.140: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 3 08:10:45.151: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 08:10:45.151: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 3 08:10:45.151: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Aug 3 08:10:45.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-5653 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 3 08:10:45.466: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 3 08:10:45.466: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 3 08:10:45.466: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 3 08:10:45.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-5653 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 3 08:10:45.755: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 3 08:10:45.755: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 3 08:10:45.755: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 3 08:10:45.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=statefulset-5653 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 3 08:10:46.028: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 3 08:10:46.028: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 3 08:10:46.028: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 3 08:10:46.028: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 3 08:10:46.033: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Aug 3 08:10:56.047: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Aug 3 08:10:56.047: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Aug 3 08:10:56.047: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Aug 3 08:10:56.071: INFO: POD NODE PHASE GRACE CONDITIONS +Aug 3 08:10:56.071: INFO: ss-0 dce-10-6-213-50 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:13 +0000 UTC }] +Aug 3 08:10:56.071: INFO: ss-1 dce-10-6-213-40 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC }] +Aug 3 08:10:56.071: INFO: ss-2 dce-10-6-213-50 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC }] +Aug 3 08:10:56.071: INFO: +Aug 3 08:10:56.071: INFO: StatefulSet ss has not reached scale 0, at 3 +Aug 3 08:10:57.086: INFO: POD NODE PHASE GRACE CONDITIONS +Aug 3 08:10:57.086: INFO: ss-0 dce-10-6-213-50 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:13 +0000 UTC }] +Aug 3 08:10:57.086: INFO: ss-1 dce-10-6-213-40 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC }] +Aug 3 08:10:57.086: INFO: ss-2 dce-10-6-213-50 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC }] +Aug 3 08:10:57.086: INFO: +Aug 3 08:10:57.086: INFO: StatefulSet ss has not reached scale 0, at 3 +Aug 3 08:10:58.102: INFO: POD NODE PHASE GRACE CONDITIONS +Aug 3 08:10:58.102: INFO: ss-0 dce-10-6-213-50 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:13 +0000 UTC }] +Aug 3 08:10:58.102: INFO: ss-1 dce-10-6-213-40 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC }] +Aug 3 08:10:58.102: INFO: ss-2 dce-10-6-213-50 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:34 +0000 UTC }] +Aug 3 08:10:58.102: INFO: +Aug 3 08:10:58.102: INFO: StatefulSet ss has not reached scale 0, at 3 +Aug 3 08:10:59.112: INFO: POD NODE PHASE GRACE CONDITIONS +Aug 3 08:10:59.112: INFO: ss-0 dce-10-6-213-50 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-03 08:10:13 +0000 UTC }] +Aug 3 08:10:59.112: INFO: +Aug 3 08:10:59.112: INFO: StatefulSet ss has not reached scale 0, at 1 +Aug 3 08:11:00.120: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.951451452s +Aug 3 08:11:01.132: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.944163885s +Aug 3 08:11:02.141: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.931025797s +Aug 3 08:11:03.149: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.923359924s +Aug 3 08:11:04.163: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.913865692s +Aug 3 08:11:05.172: INFO: Verifying statefulset ss doesn't scale past 0 for another 899.531421ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5653 +Aug 3 08:11:06.184: INFO: Scaling statefulset ss to 0 +Aug 3 08:11:06.207: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 3 08:11:06.211: INFO: Deleting all statefulset in ns statefulset-5653 +Aug 3 08:11:06.217: INFO: Scaling statefulset ss to 0 +Aug 3 08:11:06.236: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 3 08:11:06.244: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:11:06.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5653" for this suite. 
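For orientation, the `ss` object driven above is a burst-scaling StatefulSet: `podManagementPolicy: Parallel` is what allows replicas to be created and deleted without waiting on ordinal order, and readiness is toggled by moving `index.html` in and out of the httpd docroot. A minimal sketch of such a manifest follows; the labels and image tag are assumptions, since the log prints only the Service name (`test`), the container name (`webserver`), and the Apache docroot path, not the actual spec.
```
# Illustrative sketch only -- not part of the captured e2e.log.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-5653
spec:
  serviceName: test              # the Service the log shows being created first
  podManagementPolicy: Parallel  # "burst" behaviour: no ordered create/delete
  replicas: 3
  selector:
    matchLabels:
      app: ss                    # assumed label; the test's real labels are not logged
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2  # assumed tag, seen elsewhere in this log
        readinessProbe:          # fails once index.html is mv'd away, as the test does
          httpGet:
            path: /index.html
            port: 80
```
With `Parallel` management the scale-up to 3 proceeds even while `ss-0` is unready, which is exactly what the "scale up will not halt with unhealthy stateful pod" step above verifies.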
+ +• [SLOW TEST:52.742 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":325,"skipped":6384,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:11:06.293: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Aug 3 08:11:06.408: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:06.408: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:06.408: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:06.414: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 08:11:06.415: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 08:11:07.431: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:07.431: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:07.431: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:07.439: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 08:11:07.439: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 08:11:08.428: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:08.429: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:08.429: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:08.434: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 08:11:08.434: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 08:11:09.432: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:09.432: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:09.432: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:09.439: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 08:11:09.439: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 08:11:10.425: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:10.425: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:10.425: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node +Aug 3 08:11:10.430: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 08:11:10.430: INFO: Node dce-10-6-213-40 is running 0 daemon pod, expected 1 +Aug 3 08:11:11.430: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:11.430: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:11.430: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:11.435: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 3 08:11:11.435: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Aug 3 08:11:11.464: INFO: DaemonSet pods can't tolerate node dce-10-6-213-10 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:11.464: INFO: DaemonSet pods can't tolerate node dce-10-6-213-20 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:11.464: INFO: DaemonSet pods can't tolerate node dce-10-6-213-30 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 3 08:11:11.470: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 3 08:11:11.470: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Wait for the failed daemon pod to be completely deleted. +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7625, will wait for the garbage collector to delete the pods +Aug 3 08:11:12.567: INFO: Deleting DaemonSet.extensions daemon-set took: 19.751286ms +Aug 3 08:11:12.668: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.47454ms +Aug 3 08:11:17.678: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 3 08:11:17.678: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 3 08:11:17.682: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"645993"},"items":null} + +Aug 3 08:11:17.686: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"645993"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:11:17.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-7625" for this suite. 
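As context for the taint messages repeated above: the test's DaemonSet declares no toleration for the masters' `node-role.kubernetes.io/master:NoSchedule` taint, so only the two worker nodes are counted. A minimal manifest of that shape might look like the sketch below (the label scheme and image are assumptions; the log does not print the spec).
```
# Illustrative sketch only -- not part of the captured e2e.log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-7625
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # assumed label scheme
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      # No toleration for node-role.kubernetes.io/master:NoSchedule is
      # declared here, so the three tainted master nodes are skipped --
      # exactly the "can't tolerate node" lines in the log above.
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2  # assumed image
```
When the framework force-sets one daemon pod's phase to `Failed`, the DaemonSet controller deletes it and creates a replacement; that revival is the retry behaviour this conformance case checks.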
+ +• [SLOW TEST:11.423 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":326,"skipped":6429,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:11:17.717: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 +STEP: create the container to handle the HTTPGet hook request. +Aug 3 08:11:17.783: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:11:19.792: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:11:21.801: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:11:23.796: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Aug 3 08:11:23.823: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:11:25.835: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:11:27.836: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Aug 3 08:11:27.854: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Aug 3 08:11:27.860: INFO: Pod pod-with-prestop-http-hook still exists +Aug 3 08:11:29.861: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Aug 3 08:11:29.869: INFO: Pod pod-with-prestop-http-hook still exists +Aug 3 08:11:31.863: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Aug 3 08:11:31.876: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:11:31.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-9374" for this suite. 
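The preStop exercise above uses two pods: `pod-handle-http-request`, which serves and records HTTP requests, and `pod-with-prestop-http-hook`, whose deletion should trigger an HTTP GET against the handler before the container is stopped. A sketch of the hooked pod, with assumed image, args, path, port, and handler IP (none of which are printed in this log), could look like:
```
# Illustrative sketch only -- not part of the captured e2e.log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
  namespace: container-lifecycle-hook-9374
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/e2e-test-images/agnhost:2.33  # assumed; any long-running image works
    args: ["pause"]                                 # keep the container alive until deletion
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # assumed path; the handler only needs to record the hit
          port: 8080                # assumed handler port
          host: 10.244.1.23         # hypothetical pod IP of pod-handle-http-request
```
On deletion, the kubelet runs the preStop `httpGet` and only then proceeds with termination; the final "check prestop hook" step asserts that the handler pod saw the request.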
+ +• [SLOW TEST:14.194 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":327,"skipped":6453,"failed":0} +SS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:11:31.911: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-4e16dc80-99a9-43dd-a8eb-0f492098004c +STEP: Creating a pod to test consume configMaps +Aug 3 08:11:31.996: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0d19183e-dc59-4aac-80f3-a05a2b53a802" in namespace "projected-6339" to be "Succeeded or Failed" +Aug 3 08:11:32.005: INFO: Pod "pod-projected-configmaps-0d19183e-dc59-4aac-80f3-a05a2b53a802": Phase="Pending", Reason="", readiness=false. Elapsed: 8.584603ms +Aug 3 08:11:34.015: INFO: Pod "pod-projected-configmaps-0d19183e-dc59-4aac-80f3-a05a2b53a802": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019374619s +Aug 3 08:11:36.035: INFO: Pod "pod-projected-configmaps-0d19183e-dc59-4aac-80f3-a05a2b53a802": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039168575s +Aug 3 08:11:38.045: INFO: Pod "pod-projected-configmaps-0d19183e-dc59-4aac-80f3-a05a2b53a802": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.049279339s +STEP: Saw pod success +Aug 3 08:11:38.045: INFO: Pod "pod-projected-configmaps-0d19183e-dc59-4aac-80f3-a05a2b53a802" satisfied condition "Succeeded or Failed" +Aug 3 08:11:38.051: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-configmaps-0d19183e-dc59-4aac-80f3-a05a2b53a802 container agnhost-container: +STEP: delete the pod +Aug 3 08:11:38.106: INFO: Waiting for pod pod-projected-configmaps-0d19183e-dc59-4aac-80f3-a05a2b53a802 to disappear +Aug 3 08:11:38.120: INFO: Pod pod-projected-configmaps-0d19183e-dc59-4aac-80f3-a05a2b53a802 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:11:38.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6339" for this suite. + +• [SLOW TEST:6.234 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":328,"skipped":6455,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:11:38.149: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 08:11:38.227: INFO: Creating ReplicaSet my-hostname-basic-3c37caf4-78e5-4c4e-9d23-fac44e8ca086 +Aug 3 08:11:38.249: INFO: Pod name my-hostname-basic-3c37caf4-78e5-4c4e-9d23-fac44e8ca086: Found 0 pods out of 1 +Aug 3 08:11:43.263: INFO: Pod name my-hostname-basic-3c37caf4-78e5-4c4e-9d23-fac44e8ca086: Found 1 pods out of 1 +Aug 3 08:11:43.263: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-3c37caf4-78e5-4c4e-9d23-fac44e8ca086" is running +Aug 3 08:11:43.267: INFO: Pod "my-hostname-basic-3c37caf4-78e5-4c4e-9d23-fac44e8ca086-th2pd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-03 08:11:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-03 08:11:41 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-03 08:11:41 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-03 08:11:38 +0000 UTC Reason: Message:}]) +Aug 3 
08:11:43.267: INFO: Trying to dial the pod +Aug 3 08:11:48.293: INFO: Controller my-hostname-basic-3c37caf4-78e5-4c4e-9d23-fac44e8ca086: Got expected result from replica 1 [my-hostname-basic-3c37caf4-78e5-4c4e-9d23-fac44e8ca086-th2pd]: "my-hostname-basic-3c37caf4-78e5-4c4e-9d23-fac44e8ca086-th2pd", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:11:48.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-1207" for this suite. + +• [SLOW TEST:10.162 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":329,"skipped":6472,"failed":0} +S +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:11:48.311: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a best-effort pod +STEP: Ensuring resource quota with best effort scope captures the pod usage +STEP: Ensuring resource quota with not best effort ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a not best-effort pod +STEP: Ensuring resource quota with not best effort scope captures the pod usage +STEP: Ensuring resource quota with best effort scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:12:04.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7522" for this suite. + +• [SLOW TEST:16.302 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with best effort scope. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":346,"completed":330,"skipped":6473,"failed":0} +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:12:04.613: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 08:12:04.698: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Aug 3 08:12:04.709: INFO: Pod name sample-pod: Found 0 pods out of 1 +Aug 3 08:12:09.716: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Aug 3 08:12:09.716: INFO: Creating deployment "test-rolling-update-deployment" +Aug 3 08:12:09.733: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Aug 3 08:12:09.747: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Aug 3 08:12:11.767: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Aug 3 08:12:11.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 12, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 12, 9, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 12, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 12, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-796dbc4547\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 08:12:13.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 12, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 12, 9, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 12, 9, 0, time.Local), 
LastTransitionTime:time.Date(2022, time.August, 3, 8, 12, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-796dbc4547\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 08:12:15.786: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 3 08:12:15.828: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2747 77bddd55-5044-486f-aae9-5905a016c6e2 646449 1 2022-08-03 08:12:09 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005a60308 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-08-03 08:12:09 +0000 UTC,LastTransitionTime:2022-08-03 08:12:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-796dbc4547" has successfully progressed.,LastUpdateTime:2022-08-03 08:12:14 +0000 UTC,LastTransitionTime:2022-08-03 08:12:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Aug 3 08:12:15.840: INFO: New ReplicaSet "test-rolling-update-deployment-796dbc4547" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-796dbc4547 deployment-2747 08b233b8-29e8-4e9c-9333-a77e8e5748a3 646439 1 2022-08-03 08:12:09 +0000 UTC map[name:sample-pod pod-template-hash:796dbc4547] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 77bddd55-5044-486f-aae9-5905a016c6e2 0xc005a60787 0xc005a60788}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 796dbc4547,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:796dbc4547] map[] [] [] 
[]} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005a607f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Aug 3 08:12:15.840: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Aug 3 08:12:15.841: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2747 b0771506-b2f9-415f-9593-56546f1afa6f 646448 2 2022-08-03 08:12:04 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 77bddd55-5044-486f-aae9-5905a016c6e2 0xc005a606b7 0xc005a606b8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005a60718 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 3 08:12:15.848: INFO: Pod "test-rolling-update-deployment-796dbc4547-d2xrk" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-796dbc4547-d2xrk test-rolling-update-deployment-796dbc4547- deployment-2747 b65f16db-9d16-4e4e-a102-f177f9327d6d 646438 0 2022-08-03 08:12:09 +0000 UTC map[name:sample-pod pod-template-hash:796dbc4547] map[cni.projectcalico.org/ipv4pools:["default-ipv4-ippool"] dce.daocloud.io/parcel.egress.burst:0 dce.daocloud.io/parcel.egress.rate:0 dce.daocloud.io/parcel.ingress.burst:0 dce.daocloud.io/parcel.ingress.rate:0] [{apps/v1 ReplicaSet test-rolling-update-deployment-796dbc4547 08b233b8-29e8-4e9c-9333-a77e8e5748a3 0xc005a60c57 0xc005a60c58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jwlnb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwlnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce-10-6-213-50,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initializ
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 08:12:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 08:12:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 08:12:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-03 08:12:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.6.213.50,PodIP:172.29.175.19,StartTime:2022-08-03 08:12:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-03 08:12:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:docker://6fa73bb1283541ad4fe6de7128862976d4203fab4ec9a42a6dca6f9db8db24ff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.29.175.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:12:15.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2747" for this suite. + +• [SLOW TEST:11.273 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":331,"skipped":6473,"failed":0} +SS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:12:15.887: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Failed +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message 
should be set +Aug 3 08:12:20.066: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:12:20.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-3242" for this suite. +•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":346,"completed":332,"skipped":6475,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:12:20.100: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 08:12:20.649: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 08:12:22.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 8, 12, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 12, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 12, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 12, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 08:12:25.702: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod +STEP: 'kubectl attach' the pod, should be denied by the webhook +Aug 3 08:12:29.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=webhook-5453 attach --namespace=webhook-5453 to-be-attached-pod -i -c=container1' +Aug 3 08:12:30.032: INFO: rc: 1 +[AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:12:30.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5453" for this suite. +STEP: Destroying namespace "webhook-5453-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:10.157 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":333,"skipped":6481,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:12:30.257: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-bdd69a1c-5d1f-462f-9979-e9b8983799f1 +STEP: Creating a pod to test consume configMaps +Aug 3 08:12:30.424: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d5beff14-2a2d-48e1-b16c-05b103aa91c5" in namespace "projected-7989" to be "Succeeded or Failed" +Aug 3 08:12:30.432: INFO: Pod "pod-projected-configmaps-d5beff14-2a2d-48e1-b16c-05b103aa91c5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.617662ms +Aug 3 08:12:32.446: INFO: Pod "pod-projected-configmaps-d5beff14-2a2d-48e1-b16c-05b103aa91c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021924445s +Aug 3 08:12:35.401: INFO: Pod "pod-projected-configmaps-d5beff14-2a2d-48e1-b16c-05b103aa91c5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.977016543s +STEP: Saw pod success +Aug 3 08:12:35.401: INFO: Pod "pod-projected-configmaps-d5beff14-2a2d-48e1-b16c-05b103aa91c5" satisfied condition "Succeeded or Failed" +Aug 3 08:12:35.427: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-configmaps-d5beff14-2a2d-48e1-b16c-05b103aa91c5 container agnhost-container: +STEP: delete the pod +Aug 3 08:12:35.470: INFO: Waiting for pod pod-projected-configmaps-d5beff14-2a2d-48e1-b16c-05b103aa91c5 to disappear +Aug 3 08:12:35.476: INFO: Pod pod-projected-configmaps-d5beff14-2a2d-48e1-b16c-05b103aa91c5 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:12:35.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7989" for this suite. + +• [SLOW TEST:5.238 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":334,"skipped":6486,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:12:35.496: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 08:12:37.334: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Aug 3 08:12:39.341: INFO: The status of Pod pod-exec-websocket-b9d5e392-d177-4da8-82ee-f2276966f80b is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:12:41.356: INFO: The status of Pod pod-exec-websocket-b9d5e392-d177-4da8-82ee-f2276966f80b is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:12:44.238: INFO: The status of Pod pod-exec-websocket-b9d5e392-d177-4da8-82ee-f2276966f80b is Pending, waiting for it to be Running (with Ready = true) +Aug 3 08:12:45.356: INFO: The status of Pod pod-exec-websocket-b9d5e392-d177-4da8-82ee-f2276966f80b is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:12:45.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-7926" for this suite. 
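
For readers reproducing the exec-over-websockets check above outside the conformance suite, here is a minimal client-go sketch (not part of the suite's log) that runs a command through the pod `exec` subresource. It uses client-go's SPDY executor rather than the raw websocket channel the test drives; the kubeconfig path, namespace, pod name, and container name are all assumptions.
```
package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	// Load the same kind of kubeconfig the suite uses; the path is an assumption.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// POST to the pod's exec subresource, the same endpoint the test hits over a websocket.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("default").  // assumption
		Name("pod-exec-demo"). // assumption
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main", // assumption
			Command:   []string{"echo", "remote execution test"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	// Stream the command's output back to the local process.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		panic(err)
	}
	fmt.Println("exec finished")
}
```
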
+ +• [SLOW TEST:10.086 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":335,"skipped":6511,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:12:45.582: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:12:53.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-189" for this suite. 
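
The kubelet test above schedules a busybox command that always fails and asserts that the container reports a terminated state with a reason set. A sketch of the same check via client-go follows; the pod name and namespace are illustrative.
```
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Fetch the pod and inspect the last observed container state (names are illustrative).
	pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "bin-false-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if term := cs.State.Terminated; term != nil {
			// The conformance check requires a non-empty Reason, e.g. "Error".
			fmt.Printf("container %s terminated: reason=%q exitCode=%d\n",
				cs.Name, term.Reason, term.ExitCode)
		}
	}
}
```
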
+ +• [SLOW TEST:8.141 seconds] +[sig-node] Kubelet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79 + should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":346,"completed":336,"skipped":6523,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:12:53.724: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename ingressclass +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:186 +[It] should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Aug 3 08:12:53.839: INFO: starting watch +STEP: patching +STEP: updating +Aug 3 08:12:53.856: INFO: waiting for watch events with expected annotations +Aug 3 08:12:53.856: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:12:53.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingressclass-9384" for this suite. 
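
The IngressClass test above walks the full create/get/list/watch/patch/update/delete cycle against the networking.k8s.io/v1 API. Below is a minimal client-go sketch of the create/list/delete portion; the class name and controller string are assumptions.
```
package main

import (
	"context"
	"fmt"
	"os"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ic := client.NetworkingV1().IngressClasses()

	// Create a cluster-scoped IngressClass (controller string is illustrative).
	created, err := ic.Create(context.TODO(), &networkingv1.IngressClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-class"},
		Spec:       networkingv1.IngressClassSpec{Controller: "example.com/ingress-controller"},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created:", created.Name)

	// List, then delete, mirroring part of the cycle the test exercises.
	list, err := ic.List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ingress classes:", len(list.Items))

	if err := ic.Delete(context.TODO(), created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```
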
+•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":346,"completed":337,"skipped":6532,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:12:53.924: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 3 08:12:54.900: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Aug 3 08:12:56.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 8, 12, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 12, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 12, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 12, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 3 08:12:58.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 3, 8, 12, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 12, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 3, 8, 12, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 3, 8, 12, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 3 08:13:01.960: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a validating webhook configuration 
+STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Updating a validating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Patching a validating webhook configuration's rules to include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:13:02.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5268" for this suite. +STEP: Destroying namespace "webhook-5268-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:8.255 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":338,"skipped":6556,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:13:02.180: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for all rs to be garbage collected +STEP: expected 0 pods, got 2 pods +STEP: Gathering metrics +Aug 3 08:13:02.907: INFO: The status of Pod dce-kube-controller-manager-dce-10-6-213-30 is Running (Ready = true) +Aug 3 08:14:03.178: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:14:03.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-3743" for this suite. 
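
The garbage-collector test above deletes a Deployment without orphaning and waits for its ReplicaSets to be collected. The client-side equivalent is a delete with a background propagation policy, as in this sketch (deployment name and namespace are illustrative); `metav1.DeletePropagationOrphan` would instead leave the ReplicaSets behind.
```
package main

import (
	"context"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Background propagation: the Deployment object is deleted immediately and the
	// garbage collector then removes its dependent ReplicaSets and Pods, which is
	// the "not orphaning" behaviour the test verifies.
	policy := metav1.DeletePropagationBackground
	err = client.AppsV1().Deployments("default").Delete(context.TODO(), "simpletest-deployment",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}
```
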
+ +• [SLOW TEST:61.028 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":339,"skipped":6565,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:14:03.208: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-projected-wqzq +STEP: Creating a pod to test atomic-volume-subpath +Aug 3 08:14:03.325: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wqzq" in namespace "subpath-2063" to be "Succeeded or Failed" +Aug 3 08:14:03.330: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Pending", Reason="", readiness=false. Elapsed: 5.464592ms +Aug 3 08:14:05.343: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017550329s +Aug 3 08:14:07.358: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Running", Reason="", readiness=true. Elapsed: 4.033094789s +Aug 3 08:14:09.369: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Running", Reason="", readiness=true. Elapsed: 6.043983161s +Aug 3 08:14:11.384: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Running", Reason="", readiness=true. Elapsed: 8.058847157s +Aug 3 08:14:13.396: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Running", Reason="", readiness=true. Elapsed: 10.070705348s +Aug 3 08:14:15.407: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Running", Reason="", readiness=true. Elapsed: 12.08236112s +Aug 3 08:14:17.421: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Running", Reason="", readiness=true. Elapsed: 14.095648176s +Aug 3 08:14:19.432: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Running", Reason="", readiness=true. Elapsed: 16.107380376s +Aug 3 08:14:21.448: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Running", Reason="", readiness=true. Elapsed: 18.122772814s +Aug 3 08:14:23.461: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Running", Reason="", readiness=true. Elapsed: 20.13630465s +Aug 3 08:14:25.474: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.149219414s +Aug 3 08:14:27.488: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Running", Reason="", readiness=true. Elapsed: 24.162668886s +Aug 3 08:14:29.497: INFO: Pod "pod-subpath-test-projected-wqzq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.172376931s +STEP: Saw pod success +Aug 3 08:14:29.497: INFO: Pod "pod-subpath-test-projected-wqzq" satisfied condition "Succeeded or Failed" +Aug 3 08:14:29.508: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-subpath-test-projected-wqzq container test-container-subpath-projected-wqzq: +STEP: delete the pod +Aug 3 08:14:29.591: INFO: Waiting for pod pod-subpath-test-projected-wqzq to disappear +Aug 3 08:14:29.611: INFO: Pod pod-subpath-test-projected-wqzq no longer exists +STEP: Deleting pod pod-subpath-test-projected-wqzq +Aug 3 08:14:29.611: INFO: Deleting pod "pod-subpath-test-projected-wqzq" in namespace "subpath-2063" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:14:29.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-2063" for this suite. + +• [SLOW TEST:26.435 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":346,"completed":340,"skipped":6575,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:14:29.644: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Aug 3 08:14:29.734: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Aug 3 08:14:36.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-8725 --namespace=crd-publish-openapi-8725 create -f -' +Aug 3 08:14:37.303: INFO: stderr: "" +Aug 3 08:14:37.303: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Aug 3 08:14:37.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-8725 
--namespace=crd-publish-openapi-8725 delete e2e-test-crd-publish-openapi-1848-crds test-cr' +Aug 3 08:14:37.430: INFO: stderr: "" +Aug 3 08:14:37.430: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Aug 3 08:14:37.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-8725 --namespace=crd-publish-openapi-8725 apply -f -' +Aug 3 08:14:37.729: INFO: stderr: "" +Aug 3 08:14:37.729: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Aug 3 08:14:37.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-8725 --namespace=crd-publish-openapi-8725 delete e2e-test-crd-publish-openapi-1848-crds test-cr' +Aug 3 08:14:37.857: INFO: stderr: "" +Aug 3 08:14:37.857: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema +Aug 3 08:14:37.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=crd-publish-openapi-8725 explain e2e-test-crd-publish-openapi-1848-crds' +Aug 3 08:14:38.824: INFO: stderr: "" +Aug 3 08:14:38.824: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1848-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:14:42.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-8725" for this suite. 
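
The CRD test above creates a custom resource carrying arbitrary unknown properties against a CRD that publishes no validation schema. A dynamic-client sketch of the same create follows; the group, kind, and resource names are the run-specific ones from the log above and will differ on any other run, and the namespace and spec fields are assumptions.
```
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Group/version/resource mirror the test's generated CRD (run-specific names).
	gvr := schema.GroupVersionResource{
		Group:    "crd-publish-openapi-test-empty.example.com",
		Version:  "v1",
		Resource: "e2e-test-crd-publish-openapi-1848-crds",
	}
	cr := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": gvr.Group + "/" + gvr.Version,
		"kind":       "e2e-test-crd-publish-openapi-1848-crd",
		"metadata":   map[string]interface{}{"name": "test-cr"},
		// With no validation schema published, unknown properties are accepted.
		"spec": map[string]interface{}{"anything": "goes"},
	}}
	created, err := dyn.Resource(gvr).Namespace("default").Create(context.TODO(), cr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created:", created.GetName())
}
```
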
+ +• [SLOW TEST:12.946 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":341,"skipped":6579,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:14:42.591: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 08:14:42.683: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e1fa3d6-1c4b-4b4d-93d3-6c66e9b529c0" in namespace "projected-8543" to be "Succeeded or Failed" +Aug 3 08:14:42.690: INFO: Pod "downwardapi-volume-9e1fa3d6-1c4b-4b4d-93d3-6c66e9b529c0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.182965ms +Aug 3 08:14:44.700: INFO: Pod "downwardapi-volume-9e1fa3d6-1c4b-4b4d-93d3-6c66e9b529c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016628956s +Aug 3 08:14:46.714: INFO: Pod "downwardapi-volume-9e1fa3d6-1c4b-4b4d-93d3-6c66e9b529c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030397946s +STEP: Saw pod success +Aug 3 08:14:46.714: INFO: Pod "downwardapi-volume-9e1fa3d6-1c4b-4b4d-93d3-6c66e9b529c0" satisfied condition "Succeeded or Failed" +Aug 3 08:14:46.720: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-9e1fa3d6-1c4b-4b4d-93d3-6c66e9b529c0 container client-container: +STEP: delete the pod +Aug 3 08:14:46.754: INFO: Waiting for pod downwardapi-volume-9e1fa3d6-1c4b-4b4d-93d3-6c66e9b529c0 to disappear +Aug 3 08:14:46.765: INFO: Pod downwardapi-volume-9e1fa3d6-1c4b-4b4d-93d3-6c66e9b529c0 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:14:46.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8543" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":346,"completed":342,"skipped":6621,"failed":0} +SSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:14:46.792: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Aug 3 08:14:46.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53921cf7-abcc-4ab3-acf5-5d9f9456a029" in namespace "downward-api-8790" to be "Succeeded or Failed" +Aug 3 08:14:46.887: INFO: Pod "downwardapi-volume-53921cf7-abcc-4ab3-acf5-5d9f9456a029": Phase="Pending", Reason="", readiness=false. Elapsed: 9.705278ms +Aug 3 08:14:48.899: INFO: Pod "downwardapi-volume-53921cf7-abcc-4ab3-acf5-5d9f9456a029": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021878514s +Aug 3 08:14:50.911: INFO: Pod "downwardapi-volume-53921cf7-abcc-4ab3-acf5-5d9f9456a029": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034089927s +STEP: Saw pod success +Aug 3 08:14:50.911: INFO: Pod "downwardapi-volume-53921cf7-abcc-4ab3-acf5-5d9f9456a029" satisfied condition "Succeeded or Failed" +Aug 3 08:14:50.917: INFO: Trying to get logs from node dce-10-6-213-50 pod downwardapi-volume-53921cf7-abcc-4ab3-acf5-5d9f9456a029 container client-container: +STEP: delete the pod +Aug 3 08:14:50.962: INFO: Waiting for pod downwardapi-volume-53921cf7-abcc-4ab3-acf5-5d9f9456a029 to disappear +Aug 3 08:14:50.967: INFO: Pod downwardapi-volume-53921cf7-abcc-4ab3-acf5-5d9f9456a029 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:14:50.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8790" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":343,"skipped":6624,"failed":0} +SSSSSS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:14:50.985: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Service +STEP: watching for the Service to be added +Aug 3 08:14:51.072: INFO: Found Service test-service-rz44j in namespace services-6391 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Aug 3 08:14:51.072: INFO: Service test-service-rz44j created +STEP: Getting /status +Aug 3 08:14:51.083: INFO: Service test-service-rz44j has LoadBalancer: {[]} +STEP: patching the ServiceStatus +STEP: watching for the Service to be patched +Aug 3 08:14:51.096: INFO: observed Service test-service-rz44j in namespace services-6391 with annotations: map[] & LoadBalancer: {[]} +Aug 3 08:14:51.096: INFO: Found Service test-service-rz44j in namespace services-6391 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Aug 3 08:14:51.096: INFO: Service test-service-rz44j has service status patched +STEP: updating the ServiceStatus +Aug 3 08:14:51.115: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated +Aug 3 08:14:51.118: INFO: Observed Service test-service-rz44j in namespace services-6391 with annotations: map[] & Conditions: {[]} +Aug 3 08:14:51.118: INFO: Observed event: &Service{ObjectMeta:{test-service-rz44j services-6391 88755818-199f-4a66-bc19-b488f187fa45 647543 0 2022-08-03 08:14:51 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:172.31.68.65,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[172.31.68.65],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Aug 3 08:14:51.118: INFO: Found Service test-service-rz44j in namespace services-6391 with annotations: 
map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Aug 3 08:14:51.118: INFO: Service test-service-rz44j has service status updated +STEP: patching the service +STEP: watching for the Service to be patched +Aug 3 08:14:51.136: INFO: observed Service test-service-rz44j in namespace services-6391 with labels: map[test-service-static:true] +Aug 3 08:14:51.136: INFO: observed Service test-service-rz44j in namespace services-6391 with labels: map[test-service-static:true] +Aug 3 08:14:51.136: INFO: observed Service test-service-rz44j in namespace services-6391 with labels: map[test-service-static:true] +Aug 3 08:14:51.136: INFO: Found Service test-service-rz44j in namespace services-6391 with labels: map[test-service:patched test-service-static:true] +Aug 3 08:14:51.136: INFO: Service test-service-rz44j patched +STEP: deleting the service +STEP: watching for the Service to be deleted +Aug 3 08:14:51.170: INFO: Observed event: ADDED +Aug 3 08:14:51.170: INFO: Observed event: MODIFIED +Aug 3 08:14:51.170: INFO: Observed event: MODIFIED +Aug 3 08:14:51.170: INFO: Observed event: MODIFIED +Aug 3 08:14:51.170: INFO: Found Service test-service-rz44j in namespace services-6391 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Aug 3 08:14:51.170: INFO: Service test-service-rz44j deleted +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:14:51.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6391" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":346,"completed":344,"skipped":6630,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:14:51.196: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-3bb5143f-97d2-4821-b0fa-256085d5356b +STEP: Creating a pod to test consume configMaps +Aug 3 08:14:51.282: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-71533e4a-21ad-4519-bfe8-ea3bf53c42b9" in namespace "projected-7677" to be "Succeeded or Failed" +Aug 3 08:14:51.289: INFO: Pod "pod-projected-configmaps-71533e4a-21ad-4519-bfe8-ea3bf53c42b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.963708ms +Aug 3 08:14:53.305: INFO: Pod "pod-projected-configmaps-71533e4a-21ad-4519-bfe8-ea3bf53c42b9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022612546s +Aug 3 08:14:55.321: INFO: Pod "pod-projected-configmaps-71533e4a-21ad-4519-bfe8-ea3bf53c42b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038610608s +Aug 3 08:14:57.334: INFO: Pod "pod-projected-configmaps-71533e4a-21ad-4519-bfe8-ea3bf53c42b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05204762s +STEP: Saw pod success +Aug 3 08:14:57.334: INFO: Pod "pod-projected-configmaps-71533e4a-21ad-4519-bfe8-ea3bf53c42b9" satisfied condition "Succeeded or Failed" +Aug 3 08:14:57.343: INFO: Trying to get logs from node dce-10-6-213-50 pod pod-projected-configmaps-71533e4a-21ad-4519-bfe8-ea3bf53c42b9 container agnhost-container: +STEP: delete the pod +Aug 3 08:14:57.375: INFO: Waiting for pod pod-projected-configmaps-71533e4a-21ad-4519-bfe8-ea3bf53c42b9 to disappear +Aug 3 08:14:57.381: INFO: Pod pod-projected-configmaps-71533e4a-21ad-4519-bfe8-ea3bf53c42b9 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:14:57.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7677" for this suite. + +• [SLOW TEST:6.203 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":345,"skipped":6652,"failed":0} +S +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 3 08:14:57.399: INFO: >>> kubeConfig: /tmp/kubeconfig-3461818993 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-4525 +STEP: changing the ExternalName service to type=NodePort +STEP: creating replication controller externalname-service in namespace services-4525 +I0803 08:14:57.539281 21 runners.go:193] Created replication controller with name: externalname-service, namespace: services-4525, replica count: 2 +I0803 08:15:00.589904 21 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 08:15:03.591: INFO: Creating new exec pod +I0803 08:15:03.591044 21 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 
running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 3 08:15:10.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4525 exec execpod6mhxf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 3 08:15:10.944: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 3 08:15:10.944: INFO: stdout: "" +Aug 3 08:15:11.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4525 exec execpod6mhxf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 3 08:15:12.210: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 3 08:15:12.210: INFO: stdout: "" +Aug 3 08:15:12.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4525 exec execpod6mhxf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 3 08:15:13.260: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 3 08:15:13.260: INFO: stdout: "externalname-service-55gtq" +Aug 3 08:15:13.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4525 exec execpod6mhxf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.64.162 80' +Aug 3 08:15:13.557: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.64.162 80\nConnection to 172.31.64.162 80 port [tcp/http] succeeded!\n" +Aug 3 08:15:13.557: INFO: stdout: "" +Aug 3 08:15:14.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4525 exec execpod6mhxf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.64.162 80' +Aug 3 08:15:14.899: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.64.162 80\nConnection to 172.31.64.162 80 port [tcp/http] succeeded!\n" +Aug 3 08:15:14.899: INFO: stdout: "" +Aug 3 08:15:15.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4525 exec execpod6mhxf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.64.162 80' +Aug 3 08:15:15.885: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.64.162 80\nConnection to 172.31.64.162 80 port [tcp/http] succeeded!\n" +Aug 3 08:15:15.885: INFO: stdout: "" +Aug 3 08:15:16.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4525 exec execpod6mhxf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.64.162 80' +Aug 3 08:15:16.873: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.64.162 80\nConnection to 172.31.64.162 80 port [tcp/http] succeeded!\n" +Aug 3 08:15:16.873: INFO: stdout: "externalname-service-dc9gb" +Aug 3 08:15:16.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4525 exec execpod6mhxf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.6.213.40 32441' +Aug 3 08:15:17.186: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.6.213.40 32441\nConnection to 10.6.213.40 32441 port [tcp/*] succeeded!\n" +Aug 3 08:15:17.186: INFO: stdout: "externalname-service-dc9gb" +Aug 3 08:15:17.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3461818993 --namespace=services-4525 exec execpod6mhxf -- /bin/sh -x -c echo 
hostName | nc -v -t -w 2 10.6.213.50 32441' +Aug 3 08:15:17.650: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.6.213.50 32441\nConnection to 10.6.213.50 32441 port [tcp/*] succeeded!\n" +Aug 3 08:15:17.650: INFO: stdout: "externalname-service-55gtq" +Aug 3 08:15:17.650: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 3 08:15:17.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4525" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:20.309 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":346,"completed":346,"skipped":6653,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug 3 08:15:17.710: INFO: Running AfterSuite actions on all nodes +Aug 3 08:15:17.710: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 +Aug 3 08:15:17.710: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 +Aug 3 08:15:17.710: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 +Aug 3 08:15:17.710: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 +Aug 3 08:15:17.710: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 +Aug 3 08:15:17.710: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 +Aug 3 08:15:17.710: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 +Aug 3 08:15:17.710: INFO: Running AfterSuite actions on node 1 +Aug 3 08:15:17.710: INFO: Skipping dumping logs from cluster + +JUnit report was created: /tmp/sonobuoy/results/junit_01.xml +{"msg":"Test Suite completed","total":346,"completed":346,"skipped":6696,"failed":0} + +Ran 346 of 7042 Specs in 7135.046 seconds +SUCCESS! 
-- 346 Passed | 0 Failed | 0 Pending | 6696 Skipped +PASS + +Ginkgo ran 1 suite in 1h58m58.812954978s +Test Suite Passed diff --git a/v1.23/daocloud/junit_01.xml b/v1.23/daocloud/junit_01.xml new file mode 100644 index 0000000000..94526e1f75 --- /dev/null +++ b/v1.23/daocloud/junit_01.xml @@ -0,0 +1,20437 @@