diff --git a/v1.15/dce/PRODUCT.yaml b/v1.15/dce/PRODUCT.yaml
new file mode 100644
index 0000000000..1ac484fa3e
--- /dev/null
+++ b/v1.15/dce/PRODUCT.yaml
@@ -0,0 +1,6 @@
+vendor: DaoCloud
+name: DaoCloud Enterprise
+version: v1.15.3
+website_url: https://www.daocloud.io/dce
+documentation_url: https://download.daocloud.io/DaoCloud_Enterprise/DaoCloud_Enterprise/3.1.4
+product_logo_url: https://guide.daocloud.io/download/attachments/524290/global.logo?version=2&modificationDate=1469173304363&api=v2
diff --git a/v1.15/dce/README.md b/v1.15/dce/README.md
new file mode 100644
index 0000000000..02023d70fb
--- /dev/null
+++ b/v1.15/dce/README.md
@@ -0,0 +1,28 @@
+# DaoCloud Enterprise
+
+DaoCloud Enterprise is a Kubernetes-based platform developed by [DaoCloud](https://www.daocloud.io).
+
+## How to Reproduce
+
+First, install DaoCloud Enterprise 3.1.4, which is based on Kubernetes 1.15.3. To install DaoCloud Enterprise, run the following commands on a CentOS 7.5 system:
+```
+sudo su
+curl -L https://dce.daocloud.io/DaoCloud_Enterprise/3.1.4/os-requirements > ./os-requirements
+chmod +x ./os-requirements
+./os-requirements
+
+bash -c "$(docker run -i --rm daocloud.io/daocloud/dce:3.1.4-31535 install)"
+```
+To add more nodes to the cluster, log in to the DaoCloud Enterprise control panel and follow the instructions in the node management section.
+
+After the installation, run ```docker exec -it `docker ps | grep dce-kube-controller | awk '{print$1}'` bash``` to enter the DaoCloud Enterprise Kubernetes controller container.
+
+The standard tool for running these tests is
+[Sonobuoy](https://github.com/heptio/sonobuoy), and the standard way to run
+these in your cluster is with `curl -L https://raw.githubusercontent.com/cncf/k8s-conformance/master/sonobuoy-conformance.yaml | kubectl apply -f -`.
+
+Watch Sonobuoy's logs with `kubectl logs -f -n sonobuoy sonobuoy` and wait for
+the line `no-exit was specified, sonobuoy is now blocking`. At this point, use
+`kubectl cp` to bring the results to your local machine, expand the tarball, and
+retain the 3 files `plugins/e2e/results/{e2e.log,junit.xml,version.txt}`, which will
+be included in your submission.
diff --git a/v1.15/dce/e2e.log b/v1.15/dce/e2e.log
new file mode 100644
index 0000000000..f1a0c06fb8
--- /dev/null
+++ b/v1.15/dce/e2e.log
@@ -0,0 +1,10859 @@
+I1210 09:57:22.070854 19 test_context.go:406] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-845205613
+I1210 09:57:22.070965 19 e2e.go:241] Starting e2e run "1e7faa35-c3f3-46a0-bfa5-98bef531e4ca" on Ginkgo node 1
+Running Suite: Kubernetes e2e suite
+===================================
+Random Seed: 1575971840 - Will randomize all specs
+Will run 215 of 4413 specs
+
+Dec 10 09:57:22.167: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+Dec 10 09:57:22.169: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
+Dec 10 09:57:22.189: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
+Dec 10 09:57:22.216: INFO: 19 / 19 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
+Dec 10 09:57:22.216: INFO: expected 5 pod replicas in namespace 'kube-system', 5 are Running and Ready.
+Dec 10 09:57:22.216: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Dec 10 09:57:22.225: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) +Dec 10 09:57:22.225: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'dce-cloud-provider-manager' (0 seconds elapsed) +Dec 10 09:57:22.225: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Dec 10 09:57:22.225: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'node-local-dns' (0 seconds elapsed) +Dec 10 09:57:22.226: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'smokeping' (0 seconds elapsed) +Dec 10 09:57:22.226: INFO: e2e test version: v1.15.3 +Dec 10 09:57:22.227: INFO: kube-apiserver version: v1.15.3 +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 09:57:22.227: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename emptydir-wrapper +Dec 10 09:57:22.269: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. +Dec 10 09:57:22.278: INFO: Found ClusterRoles; assuming RBAC is enabled. +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-976 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 09:57:24.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-976" for this suite. 
+Dec 10 09:57:30.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 09:57:30.510: INFO: namespace emptydir-wrapper-976 deletion completed in 6.087598456s + +• [SLOW TEST:8.283 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + should not conflict [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 09:57:30.510: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-750 +STEP: Waiting for a default service account to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating Pod +STEP: Waiting for the pod running +STEP: Geting the pod +STEP: Reading file content from the nginx-container +Dec 10 09:57:34.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec pod-sharedvolume-96feb6d9-4319-4367-9061-2fac80e0356f -c busybox-main-container --namespace=emptydir-750 -- cat /usr/share/volumeshare/shareddata.txt' +Dec 10 09:57:35.049: INFO: stderr: "" +Dec 10 09:57:35.049: INFO: stdout: "Hello from the busy-box sub-container\n" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 09:57:35.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-750" for this suite. 
+Dec 10 09:57:41.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 09:57:41.146: INFO: namespace emptydir-750 deletion completed in 6.091894994s + +• [SLOW TEST:10.636 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + pod should support shared volumes between containers [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 09:57:41.146: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8851 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a pod to test downward API volume plugin +Dec 10 09:57:41.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6bb513af-b27b-4eb8-be49-e6a70ceff149" in namespace "projected-8851" to be "success or failure" +Dec 10 09:57:41.301: INFO: Pod "downwardapi-volume-6bb513af-b27b-4eb8-be49-e6a70ceff149": Phase="Pending", Reason="", readiness=false. Elapsed: 2.533388ms +Dec 10 09:57:43.306: INFO: Pod "downwardapi-volume-6bb513af-b27b-4eb8-be49-e6a70ceff149": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006814507s +STEP: Saw pod success +Dec 10 09:57:43.306: INFO: Pod "downwardapi-volume-6bb513af-b27b-4eb8-be49-e6a70ceff149" satisfied condition "success or failure" +Dec 10 09:57:43.309: INFO: Trying to get logs from node dce82 pod downwardapi-volume-6bb513af-b27b-4eb8-be49-e6a70ceff149 container client-container: +STEP: delete the pod +Dec 10 09:57:43.325: INFO: Waiting for pod downwardapi-volume-6bb513af-b27b-4eb8-be49-e6a70ceff149 to disappear +Dec 10 09:57:43.330: INFO: Pod downwardapi-volume-6bb513af-b27b-4eb8-be49-e6a70ceff149 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 09:57:43.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8851" for this suite. 
+Dec 10 09:57:49.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 09:57:49.420: INFO: namespace projected-8851 deletion completed in 6.084285472s + +• [SLOW TEST:8.274 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 09:57:49.420: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9125 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a pod to test downward API volume plugin +Dec 10 09:57:49.569: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72447da1-6530-456b-b0de-ac989d3c5a33" in namespace "downward-api-9125" to be "success or failure" +Dec 10 09:57:49.572: INFO: Pod "downwardapi-volume-72447da1-6530-456b-b0de-ac989d3c5a33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.646979ms +Dec 10 09:57:51.575: INFO: Pod "downwardapi-volume-72447da1-6530-456b-b0de-ac989d3c5a33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005949278s +STEP: Saw pod success +Dec 10 09:57:51.575: INFO: Pod "downwardapi-volume-72447da1-6530-456b-b0de-ac989d3c5a33" satisfied condition "success or failure" +Dec 10 09:57:51.578: INFO: Trying to get logs from node dce82 pod downwardapi-volume-72447da1-6530-456b-b0de-ac989d3c5a33 container client-container: +STEP: delete the pod +Dec 10 09:57:51.590: INFO: Waiting for pod downwardapi-volume-72447da1-6530-456b-b0de-ac989d3c5a33 to disappear +Dec 10 09:57:51.592: INFO: Pod downwardapi-volume-72447da1-6530-456b-b0de-ac989d3c5a33 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 09:57:51.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9125" for this suite. 
+Dec 10 09:57:57.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 09:57:57.683: INFO: namespace downward-api-9125 deletion completed in 6.088459025s + +• [SLOW TEST:8.263 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 09:57:57.683: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-7584 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +Dec 10 09:57:57.825: INFO: Creating ReplicaSet my-hostname-basic-f3860c6c-8f41-49de-aa09-9040223304c8 +Dec 10 09:57:57.835: INFO: Pod name my-hostname-basic-f3860c6c-8f41-49de-aa09-9040223304c8: Found 0 pods out of 1 +Dec 10 09:58:02.839: INFO: Pod name my-hostname-basic-f3860c6c-8f41-49de-aa09-9040223304c8: Found 1 pods out of 1 +Dec 10 09:58:02.839: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f3860c6c-8f41-49de-aa09-9040223304c8" is running +Dec 10 09:58:02.842: INFO: Pod "my-hostname-basic-f3860c6c-8f41-49de-aa09-9040223304c8-ptbsk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-10 09:57:57 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-10 09:57:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-10 09:57:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-10 09:57:57 +0000 UTC Reason: Message:}]) +Dec 10 09:58:02.842: INFO: Trying to dial the pod +Dec 10 09:58:07.853: INFO: Controller my-hostname-basic-f3860c6c-8f41-49de-aa09-9040223304c8: Got expected result from replica 1 [my-hostname-basic-f3860c6c-8f41-49de-aa09-9040223304c8-ptbsk]: "my-hostname-basic-f3860c6c-8f41-49de-aa09-9040223304c8-ptbsk", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 09:58:07.853: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready +STEP: Destroying namespace "replicaset-7584" for this suite. +Dec 10 09:58:13.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 09:58:13.939: INFO: namespace replicaset-7584 deletion completed in 6.082098266s + +• [SLOW TEST:16.256 seconds] +[sig-apps] ReplicaSet +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +S +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 09:58:13.939: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7257 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating configMap with name cm-test-opt-del-7672f3cc-77ab-4797-bc12-9091a2e5c169 +STEP: Creating configMap with name cm-test-opt-upd-6d8577da-e5df-4bbd-a76d-bf8cd2116425 +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-7672f3cc-77ab-4797-bc12-9091a2e5c169 +STEP: Updating configmap cm-test-opt-upd-6d8577da-e5df-4bbd-a76d-bf8cd2116425 +STEP: Creating configMap with name cm-test-opt-create-9be18dbd-d636-4170-9a34-13ca19b3503c +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 09:59:22.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7257" for this suite. 
+Dec 10 09:59:44.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 09:59:44.546: INFO: namespace projected-7257 deletion completed in 22.074712925s + +• [SLOW TEST:90.606 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run default + should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 09:59:44.546: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2707 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 +[BeforeEach] [k8s.io] Kubectl run default + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 +[It] should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: running the image docker.io/library/nginx:1.14-alpine +Dec 10 09:59:44.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2707' +Dec 10 09:59:44.788: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Dec 10 09:59:44.788: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" +STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created +[AfterEach] [k8s.io] Kubectl run default + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1427 +Dec 10 09:59:46.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete deployment e2e-test-nginx-deployment --namespace=kubectl-2707' +Dec 10 09:59:46.894: INFO: stderr: "" +Dec 10 09:59:46.894: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 09:59:46.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2707" for this suite. +Dec 10 09:59:52.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 09:59:52.986: INFO: namespace kubectl-2707 deletion completed in 6.087018696s + +• [SLOW TEST:8.440 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run default + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +[k8s.io] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] KubeletManagedEtcHosts + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 09:59:52.986: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-kubelet-etc-hosts-4193 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +STEP: Creating hostNetwork=true pod +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Dec 10 09:59:59.154: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4193 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 09:59:59.154: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 09:59:59.270: INFO: Exec 
stderr: "" +Dec 10 09:59:59.270: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4193 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 09:59:59.270: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 09:59:59.391: INFO: Exec stderr: "" +Dec 10 09:59:59.391: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4193 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 09:59:59.391: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 09:59:59.506: INFO: Exec stderr: "" +Dec 10 09:59:59.506: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4193 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 09:59:59.506: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 09:59:59.617: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Dec 10 09:59:59.617: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4193 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 09:59:59.617: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 09:59:59.736: INFO: Exec stderr: "" +Dec 10 09:59:59.736: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4193 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 09:59:59.736: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 09:59:59.851: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Dec 10 09:59:59.851: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4193 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 09:59:59.851: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 09:59:59.968: INFO: Exec stderr: "" +Dec 10 09:59:59.968: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4193 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 09:59:59.968: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 10:00:00.087: INFO: Exec stderr: "" +Dec 10 10:00:00.087: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4193 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 10:00:00.087: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 10:00:00.193: INFO: Exec stderr: "" +Dec 10 10:00:00.193: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4193 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 10:00:00.193: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 10:00:00.304: INFO: Exec stderr: "" +[AfterEach] [k8s.io] KubeletManagedEtcHosts + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:00:00.304: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-4193" for this suite. +Dec 10 10:00:44.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:00:44.396: INFO: namespace e2e-kubelet-etc-hosts-4193 deletion completed in 44.088302205s + +• [SLOW TEST:51.410 seconds] +[k8s.io] KubeletManagedEtcHosts +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:00:44.397: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6113 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating configMap with name configmap-test-volume-c33381b9-7231-495c-b599-ed54589a2250 +STEP: Creating a pod to test consume configMaps +Dec 10 10:00:44.550: INFO: Waiting up to 5m0s for pod "pod-configmaps-52008254-df4c-4386-9274-b2b121824fa0" in namespace "configmap-6113" to be "success or failure" +Dec 10 10:00:44.554: INFO: Pod "pod-configmaps-52008254-df4c-4386-9274-b2b121824fa0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136345ms +Dec 10 10:00:46.558: INFO: Pod "pod-configmaps-52008254-df4c-4386-9274-b2b121824fa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007978139s +Dec 10 10:00:48.562: INFO: Pod "pod-configmaps-52008254-df4c-4386-9274-b2b121824fa0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011431439s +STEP: Saw pod success +Dec 10 10:00:48.562: INFO: Pod "pod-configmaps-52008254-df4c-4386-9274-b2b121824fa0" satisfied condition "success or failure" +Dec 10 10:00:48.565: INFO: Trying to get logs from node dce82 pod pod-configmaps-52008254-df4c-4386-9274-b2b121824fa0 container configmap-volume-test: +STEP: delete the pod +Dec 10 10:00:48.585: INFO: Waiting for pod pod-configmaps-52008254-df4c-4386-9274-b2b121824fa0 to disappear +Dec 10 10:00:48.588: INFO: Pod pod-configmaps-52008254-df4c-4386-9274-b2b121824fa0 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:00:48.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6113" for this suite. +Dec 10 10:00:54.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:00:54.665: INFO: namespace configmap-6113 deletion completed in 6.074500064s + +• [SLOW TEST:10.269 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:00:54.666: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5734 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +Dec 10 10:00:58.834: INFO: Waiting up to 5m0s for pod "client-envvars-e92c0226-f276-4d04-b466-e4226af01222" in namespace "pods-5734" to be "success or failure" +Dec 10 10:00:58.837: INFO: Pod "client-envvars-e92c0226-f276-4d04-b466-e4226af01222": Phase="Pending", Reason="", readiness=false. Elapsed: 3.622117ms +Dec 10 10:01:00.841: INFO: Pod "client-envvars-e92c0226-f276-4d04-b466-e4226af01222": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007125812s +STEP: Saw pod success +Dec 10 10:01:00.841: INFO: Pod "client-envvars-e92c0226-f276-4d04-b466-e4226af01222" satisfied condition "success or failure" +Dec 10 10:01:00.844: INFO: Trying to get logs from node dce82 pod client-envvars-e92c0226-f276-4d04-b466-e4226af01222 container env3cont: +STEP: delete the pod +Dec 10 10:01:00.866: INFO: Waiting for pod client-envvars-e92c0226-f276-4d04-b466-e4226af01222 to disappear +Dec 10 10:01:00.870: INFO: Pod client-envvars-e92c0226-f276-4d04-b466-e4226af01222 no longer exists +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:01:00.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5734" for this suite. +Dec 10 10:01:42.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:01:43.031: INFO: namespace pods-5734 deletion completed in 42.157471914s + +• [SLOW TEST:48.366 seconds] +[k8s.io] Pods +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:01:43.032: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7965 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating configMap with name projected-configmap-test-volume-c5fd4855-6f9c-43a1-91d8-1a043e006411 +STEP: Creating a pod to test consume configMaps +Dec 10 10:01:43.263: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ab15f7d-844b-4119-b0db-f92996eca756" in namespace "projected-7965" to be "success or failure" +Dec 10 10:01:43.266: INFO: Pod "pod-projected-configmaps-5ab15f7d-844b-4119-b0db-f92996eca756": Phase="Pending", Reason="", readiness=false. Elapsed: 3.059244ms +Dec 10 10:01:45.270: INFO: Pod "pod-projected-configmaps-5ab15f7d-844b-4119-b0db-f92996eca756": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00659285s +STEP: Saw pod success +Dec 10 10:01:45.270: INFO: Pod "pod-projected-configmaps-5ab15f7d-844b-4119-b0db-f92996eca756" satisfied condition "success or failure" +Dec 10 10:01:45.274: INFO: Trying to get logs from node dce82 pod pod-projected-configmaps-5ab15f7d-844b-4119-b0db-f92996eca756 container projected-configmap-volume-test: +STEP: delete the pod +Dec 10 10:01:45.290: INFO: Waiting for pod pod-projected-configmaps-5ab15f7d-844b-4119-b0db-f92996eca756 to disappear +Dec 10 10:01:45.292: INFO: Pod pod-projected-configmaps-5ab15f7d-844b-4119-b0db-f92996eca756 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:01:45.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7965" for this suite. +Dec 10 10:01:51.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:01:51.390: INFO: namespace projected-7965 deletion completed in 6.094611593s + +• [SLOW TEST:8.358 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:01:51.390: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1274 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a pod to test downward API volume plugin +Dec 10 10:01:51.533: INFO: Waiting up to 5m0s for pod "downwardapi-volume-889e3764-4e33-4988-b338-012d0aee3d22" in namespace "projected-1274" to be "success or failure" +Dec 10 10:01:51.536: INFO: Pod "downwardapi-volume-889e3764-4e33-4988-b338-012d0aee3d22": Phase="Pending", Reason="", readiness=false. Elapsed: 3.583941ms +Dec 10 10:01:53.540: INFO: Pod "downwardapi-volume-889e3764-4e33-4988-b338-012d0aee3d22": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007762795s +STEP: Saw pod success +Dec 10 10:01:53.541: INFO: Pod "downwardapi-volume-889e3764-4e33-4988-b338-012d0aee3d22" satisfied condition "success or failure" +Dec 10 10:01:53.544: INFO: Trying to get logs from node dce82 pod downwardapi-volume-889e3764-4e33-4988-b338-012d0aee3d22 container client-container: +STEP: delete the pod +Dec 10 10:01:53.557: INFO: Waiting for pod downwardapi-volume-889e3764-4e33-4988-b338-012d0aee3d22 to disappear +Dec 10 10:01:53.560: INFO: Pod downwardapi-volume-889e3764-4e33-4988-b338-012d0aee3d22 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:01:53.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1274" for this suite. +Dec 10 10:01:59.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:01:59.642: INFO: namespace projected-1274 deletion completed in 6.079310588s + +• [SLOW TEST:8.252 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Projected combined + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:01:59.642: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9889 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating configMap with name configmap-projected-all-test-volume-fd4c59c5-9570-4807-b2be-2b00b9c335e6 +STEP: Creating secret with name secret-projected-all-test-volume-a96a5f58-4e2a-456d-a781-7711de511c9c +STEP: Creating a pod to test Check all projections for projected volume plugin +Dec 10 10:01:59.801: INFO: Waiting up to 5m0s for pod "projected-volume-fe3dd6c9-6621-4f93-9487-ec8b024a2295" in namespace "projected-9889" to be "success or failure" +Dec 10 10:01:59.803: INFO: Pod "projected-volume-fe3dd6c9-6621-4f93-9487-ec8b024a2295": Phase="Pending", Reason="", readiness=false. Elapsed: 1.843182ms +Dec 10 10:02:01.806: INFO: Pod "projected-volume-fe3dd6c9-6621-4f93-9487-ec8b024a2295": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.004591886s +STEP: Saw pod success +Dec 10 10:02:01.806: INFO: Pod "projected-volume-fe3dd6c9-6621-4f93-9487-ec8b024a2295" satisfied condition "success or failure" +Dec 10 10:02:01.808: INFO: Trying to get logs from node dce82 pod projected-volume-fe3dd6c9-6621-4f93-9487-ec8b024a2295 container projected-all-volume-test: +STEP: delete the pod +Dec 10 10:02:01.819: INFO: Waiting for pod projected-volume-fe3dd6c9-6621-4f93-9487-ec8b024a2295 to disappear +Dec 10 10:02:01.821: INFO: Pod projected-volume-fe3dd6c9-6621-4f93-9487-ec8b024a2295 no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:02:01.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9889" for this suite. +Dec 10 10:02:07.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:02:07.912: INFO: namespace projected-9889 deletion completed in 6.08876464s + +• [SLOW TEST:8.270 seconds] +[sig-storage] Projected combined +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:02:07.913: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-304 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Performing setup for networking test in namespace pod-network-test-304 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Dec 10 10:02:08.060: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Dec 10 10:02:28.120: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.28.8.99:8080/dial?request=hostName&protocol=http&host=172.28.8.102&port=8080&tries=1'] Namespace:pod-network-test-304 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 10:02:28.120: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 10:02:28.255: INFO: Waiting for endpoints: map[] +Dec 10 10:02:28.258: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://172.28.8.99:8080/dial?request=hostName&protocol=http&host=172.28.104.236&port=8080&tries=1'] Namespace:pod-network-test-304 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 10:02:28.258: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 10:02:28.385: INFO: Waiting for endpoints: map[] +Dec 10 10:02:28.388: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.28.8.99:8080/dial?request=hostName&protocol=http&host=172.28.194.206&port=8080&tries=1'] Namespace:pod-network-test-304 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 10:02:28.388: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 10:02:28.510: INFO: Waiting for endpoints: map[] +[AfterEach] [sig-network] Networking + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:02:28.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-304" for this suite. +Dec 10 10:02:50.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:02:50.598: INFO: namespace pod-network-test-304 deletion completed in 22.084887978s + +• [SLOW TEST:42.686 seconds] +[sig-network] Networking +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:02:50.599: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-2845 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Dec 10 
10:03:00.801: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:03:00.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W1210 10:03:00.800964 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +STEP: Destroying namespace "gc-2845" for this suite. +Dec 10 10:03:08.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:03:08.895: INFO: namespace gc-2845 deletion completed in 8.090835551s + +• [SLOW TEST:18.296 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:03:08.895: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1630 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + 
/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a pod to test downward API volume plugin +Dec 10 10:03:09.047: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f4ddcd2-624a-4a4a-a1ab-daa201ff61a7" in namespace "projected-1630" to be "success or failure" +Dec 10 10:03:09.050: INFO: Pod "downwardapi-volume-5f4ddcd2-624a-4a4a-a1ab-daa201ff61a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.153798ms +Dec 10 10:03:11.054: INFO: Pod "downwardapi-volume-5f4ddcd2-624a-4a4a-a1ab-daa201ff61a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007036824s +Dec 10 10:03:13.058: INFO: Pod "downwardapi-volume-5f4ddcd2-624a-4a4a-a1ab-daa201ff61a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010504327s +STEP: Saw pod success +Dec 10 10:03:13.058: INFO: Pod "downwardapi-volume-5f4ddcd2-624a-4a4a-a1ab-daa201ff61a7" satisfied condition "success or failure" +Dec 10 10:03:13.061: INFO: Trying to get logs from node dce82 pod downwardapi-volume-5f4ddcd2-624a-4a4a-a1ab-daa201ff61a7 container client-container: +STEP: delete the pod +Dec 10 10:03:13.080: INFO: Waiting for pod downwardapi-volume-5f4ddcd2-624a-4a4a-a1ab-daa201ff61a7 to disappear +Dec 10 10:03:13.082: INFO: Pod downwardapi-volume-5f4ddcd2-624a-4a4a-a1ab-daa201ff61a7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:03:13.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1630" for this suite. +Dec 10 10:03:19.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:03:19.166: INFO: namespace projected-1630 deletion completed in 6.080551705s + +• [SLOW TEST:10.271 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +S +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:03:19.166: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-4705 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 +[It] deployment should delete old replica sets [Conformance] + 
/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +Dec 10 10:03:19.310: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Dec 10 10:03:24.318: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Dec 10 10:03:24.318: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 +Dec 10 10:03:24.335: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4705,SelfLink:/apis/apps/v1/namespaces/deployment-4705/deployments/test-cleanup-deployment,UID:0d4538f5-c4ec-42e3-81f0-e33a9734d24b,ResourceVersion:359500,Generation:1,CreationTimestamp:2019-12-10 10:03:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} + +Dec 10 10:03:24.338: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": 
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4705,SelfLink:/apis/apps/v1/namespaces/deployment-4705/replicasets/test-cleanup-deployment-55bbcbc84c,UID:239ee5a5-f215-4500-83db-8013e6e1e0b4,ResourceVersion:359502,Generation:1,CreationTimestamp:2019-12-10 10:03:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 0d4538f5-c4ec-42e3-81f0-e33a9734d24b 0xc002bb1027 0xc002bb1028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Dec 10 10:03:24.338: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Dec 10 10:03:24.338: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-4705,SelfLink:/apis/apps/v1/namespaces/deployment-4705/replicasets/test-cleanup-controller,UID:1a5904f6-75ed-4bf8-beb1-27105e1d2b47,ResourceVersion:359501,Generation:1,CreationTimestamp:2019-12-10 10:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 0d4538f5-c4ec-42e3-81f0-e33a9734d24b 0xc002bb0f57 
0xc002bb0f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Dec 10 10:03:24.342: INFO: Pod "test-cleanup-controller-5tndh" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-5tndh,GenerateName:test-cleanup-controller-,Namespace:deployment-4705,SelfLink:/api/v1/namespaces/deployment-4705/pods/test-cleanup-controller-5tndh,UID:440b1c11-9091-45bd-8707-fb7775396d6a,ResourceVersion:359495,Generation:0,CreationTimestamp:2019-12-10 10:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 1a5904f6-75ed-4bf8-beb1-27105e1d2b47 0xc002cc40d7 0xc002cc40d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2229 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2229,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2229 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cc4150} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cc4170}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:03:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:03:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:03:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:03:19 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.82,PodIP:172.28.8.108,StartTime:2019-12-10 10:03:19 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-10 10:03:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b4fd5276ab433809e7a1155f87dda6eabd4f75fd0922efee146570ec0608e22f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:03:24.342: INFO: Pod "test-cleanup-deployment-55bbcbc84c-5fblk" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-5fblk,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4705,SelfLink:/api/v1/namespaces/deployment-4705/pods/test-cleanup-deployment-55bbcbc84c-5fblk,UID:a94089fa-57e8-49b4-8a72-4d32f425ed88,ResourceVersion:359503,Generation:0,CreationTimestamp:2019-12-10 10:03:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 239ee5a5-f215-4500-83db-8013e6e1e0b4 0xc002cc4237 0xc002cc4238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2229 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2229,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-q2229 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cc42a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cc42d0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:03:24.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-4705" for this suite. +Dec 10 10:03:30.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:03:30.418: INFO: namespace deployment-4705 deletion completed in 6.072306613s + +• [SLOW TEST:11.252 seconds] +[sig-apps] Deployment +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should delete old replica sets [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +S +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:03:30.418: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1897 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating the pod +Dec 10 10:03:33.120: INFO: Successfully updated pod "labelsupdate59eae695-3296-422d-82d3-bbb8cb5121b5" +[AfterEach] [sig-storage] Downward API volume + 
/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:03:35.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1897" for this suite. +Dec 10 10:03:57.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:03:57.394: INFO: namespace downward-api-1897 deletion completed in 22.248666827s + +• [SLOW TEST:26.977 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:03:57.395: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-264 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 +[It] should run and stop complex daemon [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +Dec 10 10:03:57.564: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Dec 10 10:03:57.571: INFO: Number of nodes with available pods: 0 +Dec 10 10:03:57.571: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Change node label to blue, check that daemon pod is launched. 
+Dec 10 10:03:57.587: INFO: Number of nodes with available pods: 0 +Dec 10 10:03:57.587: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:03:58.590: INFO: Number of nodes with available pods: 0 +Dec 10 10:03:58.590: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:03:59.590: INFO: Number of nodes with available pods: 0 +Dec 10 10:03:59.590: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:00.590: INFO: Number of nodes with available pods: 1 +Dec 10 10:04:00.590: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update the node label to green, and wait for daemons to be unscheduled +Dec 10 10:04:00.600: INFO: Number of nodes with available pods: 1 +Dec 10 10:04:00.600: INFO: Number of running nodes: 0, number of available pods: 1 +Dec 10 10:04:01.603: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:01.603: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Dec 10 10:04:01.608: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:01.608: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:02.612: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:02.612: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:03.612: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:03.612: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:04.612: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:04.612: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:05.613: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:05.613: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:06.612: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:06.612: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:07.613: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:07.613: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:08.611: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:08.611: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:09.612: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:09.612: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:10.612: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:10.612: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:11.692: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:11.692: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:12.611: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:12.611: INFO: Node dce81 is running more than one daemon pod +Dec 10 10:04:13.612: INFO: Number of nodes with available pods: 1 +Dec 10 10:04:13.612: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-264, will wait for the garbage collector to delete the pods +Dec 10 10:04:13.673: INFO: Deleting DaemonSet.extensions daemon-set took: 5.519522ms +Dec 10 10:04:14.074: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.411903ms +Dec 10 10:04:17.178: INFO: Number of nodes with available pods: 0 +Dec 10 10:04:17.178: INFO: 
Number of running nodes: 0, number of available pods: 0 +Dec 10 10:04:17.182: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-264/daemonsets","resourceVersion":"359762"},"items":null} + +Dec 10 10:04:17.184: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-264/pods","resourceVersion":"359762"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:04:17.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-264" for this suite. +Dec 10 10:04:23.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:04:23.296: INFO: namespace daemonsets-264 deletion completed in 6.090900095s + +• [SLOW TEST:25.901 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run and stop complex daemon [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:04:23.296: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6131 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a pod to test substitution in container's command +Dec 10 10:04:23.440: INFO: Waiting up to 5m0s for pod "var-expansion-b5fb3c11-e469-42e4-9fa0-4f5f58189350" in namespace "var-expansion-6131" to be "success or failure" +Dec 10 10:04:23.443: INFO: Pod "var-expansion-b5fb3c11-e469-42e4-9fa0-4f5f58189350": Phase="Pending", Reason="", readiness=false. Elapsed: 3.650333ms +Dec 10 10:04:25.447: INFO: Pod "var-expansion-b5fb3c11-e469-42e4-9fa0-4f5f58189350": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007123535s +STEP: Saw pod success +Dec 10 10:04:25.447: INFO: Pod "var-expansion-b5fb3c11-e469-42e4-9fa0-4f5f58189350" satisfied condition "success or failure" +Dec 10 10:04:25.449: INFO: Trying to get logs from node dce82 pod var-expansion-b5fb3c11-e469-42e4-9fa0-4f5f58189350 container dapi-container: +STEP: delete the pod +Dec 10 10:04:25.466: INFO: Waiting for pod var-expansion-b5fb3c11-e469-42e4-9fa0-4f5f58189350 to disappear +Dec 10 10:04:25.469: INFO: Pod var-expansion-b5fb3c11-e469-42e4-9fa0-4f5f58189350 no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:04:25.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6131" for this suite. +Dec 10 10:04:31.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:04:31.562: INFO: namespace var-expansion-6131 deletion completed in 6.088000946s + +• [SLOW TEST:8.265 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +[sig-api-machinery] Secrets + should fail to create secret due to empty secret key [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:04:31.562: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1854 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create secret due to empty secret key [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating projection with secret that has name secret-emptykey-test-eb26ccde-9bca-4385-b4ad-ab269ab1eef7 +[AfterEach] [sig-api-machinery] Secrets + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:04:31.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1854" for this suite. 
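
For reference, the `should fail to create secret due to empty secret key` case above amounts to submitting a Secret whose data map uses the empty string as a key and asserting that the apiserver rejects it at validation time. A minimal sketch of such a manifest (names are illustrative, not taken from this run):

```
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test   # illustrative name; the run above generated its own
type: Opaque
data:
  "": dmFsdWUtMQ==             # "" is an invalid data key, so the create request fails
```

Because the object never passes validation, no pod is involved, which is why the log above shows no pod activity for this case.
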
+Dec 10 10:04:37.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:04:37.795: INFO: namespace secrets-1854 deletion completed in 6.086621872s + +• [SLOW TEST:6.233 seconds] +[sig-api-machinery] Secrets +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 + should fail to create secret due to empty secret key [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:04:37.795: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1555 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a pod to test emptydir 0666 on node default medium +Dec 10 10:04:37.948: INFO: Waiting up to 5m0s for pod "pod-3580137c-7614-4503-85dd-2aa08e3f1702" in namespace "emptydir-1555" to be "success or failure" +Dec 10 10:04:37.951: INFO: Pod "pod-3580137c-7614-4503-85dd-2aa08e3f1702": Phase="Pending", Reason="", readiness=false. Elapsed: 3.72671ms +Dec 10 10:04:39.955: INFO: Pod "pod-3580137c-7614-4503-85dd-2aa08e3f1702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007289269s +STEP: Saw pod success +Dec 10 10:04:39.955: INFO: Pod "pod-3580137c-7614-4503-85dd-2aa08e3f1702" satisfied condition "success or failure" +Dec 10 10:04:39.958: INFO: Trying to get logs from node dce82 pod pod-3580137c-7614-4503-85dd-2aa08e3f1702 container test-container: +STEP: delete the pod +Dec 10 10:04:39.975: INFO: Waiting for pod pod-3580137c-7614-4503-85dd-2aa08e3f1702 to disappear +Dec 10 10:04:39.977: INFO: Pod pod-3580137c-7614-4503-85dd-2aa08e3f1702 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:04:39.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1555" for this suite. 
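
For reference, the emptydir `(root,0666,default)` case above follows the write-then-verify pattern visible in the log: a short-lived pod mounts an emptyDir volume on the default (node-disk) medium, creates a file with 0666 permissions as root, prints the result, and exits so the framework can match the container log and the Succeeded phase. A rough busybox sketch (the run itself uses the e2e mounttest image, and the file name below is an assumption):

```
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox             # stand-in for the e2e mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # default medium: backed by node storage
```
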
+Dec 10 10:04:45.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:04:46.058: INFO: namespace emptydir-1555 deletion completed in 6.076765012s + +• [SLOW TEST:8.262 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update + should support rolling-update to same image [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:04:46.058: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3479 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 +[BeforeEach] [k8s.io] Kubectl rolling-update + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517 +[It] should support rolling-update to same image [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: running the image docker.io/library/nginx:1.14-alpine +Dec 10 10:04:46.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3479' +Dec 10 10:04:46.301: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Dec 10 10:04:46.301: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" +STEP: verifying the rc e2e-test-nginx-rc was created +STEP: rolling-update to same image controller +Dec 10 10:04:46.305: INFO: scanned /root for discovery docs: +Dec 10 10:04:46.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3479' +Dec 10 10:05:02.078: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" +Dec 10 10:05:02.078: INFO: stdout: "Created e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82\nScaling up e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" +Dec 10 10:05:02.078: INFO: stdout: "Created e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82\nScaling up e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" +STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. +Dec 10 10:05:02.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3479' +Dec 10 10:05:02.167: INFO: stderr: "" +Dec 10 10:05:02.167: INFO: stdout: "e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82-4fsx9 " +Dec 10 10:05:02.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82-4fsx9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3479' +Dec 10 10:05:02.256: INFO: stderr: "" +Dec 10 10:05:02.256: INFO: stdout: "true" +Dec 10 10:05:02.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82-4fsx9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3479' +Dec 10 10:05:02.337: INFO: stderr: "" +Dec 10 10:05:02.337: INFO: stdout: "docker.io/library/nginx:1.14-alpine" +Dec 10 10:05:02.337: INFO: e2e-test-nginx-rc-8fbb47c619974d1ef49c98e7f2d59a82-4fsx9 is verified up and running +[AfterEach] [k8s.io] Kubectl rolling-update + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 +Dec 10 10:05:02.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete rc e2e-test-nginx-rc --namespace=kubectl-3479' +Dec 10 10:05:02.422: INFO: stderr: "" +Dec 10 10:05:02.422: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:05:02.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3479" for this suite. +Dec 10 10:05:24.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:05:24.511: INFO: namespace kubectl-3479 deletion completed in 22.083266577s + +• [SLOW TEST:38.453 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl rolling-update + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should support rolling-update to same image [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:05:24.511: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4032 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Dec 10 10:05:27.185: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1fcb041b-3143-42bc-9203-a9649e3882d1" +Dec 10 10:05:27.185: INFO: 
Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1fcb041b-3143-42bc-9203-a9649e3882d1" in namespace "pods-4032" to be "terminated due to deadline exceeded" +Dec 10 10:05:27.187: INFO: Pod "pod-update-activedeadlineseconds-1fcb041b-3143-42bc-9203-a9649e3882d1": Phase="Running", Reason="", readiness=true. Elapsed: 1.988868ms +Dec 10 10:05:29.190: INFO: Pod "pod-update-activedeadlineseconds-1fcb041b-3143-42bc-9203-a9649e3882d1": Phase="Running", Reason="", readiness=true. Elapsed: 2.004863559s +Dec 10 10:05:31.193: INFO: Pod "pod-update-activedeadlineseconds-1fcb041b-3143-42bc-9203-a9649e3882d1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.008141783s +Dec 10 10:05:31.193: INFO: Pod "pod-update-activedeadlineseconds-1fcb041b-3143-42bc-9203-a9649e3882d1" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:05:31.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4032" for this suite. +Dec 10 10:05:37.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:05:37.275: INFO: namespace pods-4032 deletion completed in 6.078113982s + +• [SLOW TEST:12.764 seconds] +[k8s.io] Pods +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:05:37.275: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4690 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 +STEP: Setting up data +[It] should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating pod pod-subpath-test-configmap-pt2f +STEP: Creating a pod to test atomic-volume-subpath +Dec 10 10:05:37.430: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pt2f" in namespace "subpath-4690" to be "success or failure" +Dec 10 10:05:37.433: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.712806ms +Dec 10 10:05:39.437: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007734517s +Dec 10 10:05:41.441: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Running", Reason="", readiness=true. Elapsed: 4.011267085s +Dec 10 10:05:43.445: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Running", Reason="", readiness=true. Elapsed: 6.015328571s +Dec 10 10:05:45.449: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Running", Reason="", readiness=true. Elapsed: 8.019182131s +Dec 10 10:05:47.452: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Running", Reason="", readiness=true. Elapsed: 10.022197033s +Dec 10 10:05:49.455: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Running", Reason="", readiness=true. Elapsed: 12.025026137s +Dec 10 10:05:51.458: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Running", Reason="", readiness=true. Elapsed: 14.028703711s +Dec 10 10:05:53.462: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Running", Reason="", readiness=true. Elapsed: 16.032498265s +Dec 10 10:05:55.466: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Running", Reason="", readiness=true. Elapsed: 18.036056721s +Dec 10 10:05:57.470: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Running", Reason="", readiness=true. Elapsed: 20.040399535s +Dec 10 10:05:59.474: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Running", Reason="", readiness=true. Elapsed: 22.044412169s +Dec 10 10:06:01.478: INFO: Pod "pod-subpath-test-configmap-pt2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.04843271s +STEP: Saw pod success +Dec 10 10:06:01.478: INFO: Pod "pod-subpath-test-configmap-pt2f" satisfied condition "success or failure" +Dec 10 10:06:01.481: INFO: Trying to get logs from node dce82 pod pod-subpath-test-configmap-pt2f container test-container-subpath-configmap-pt2f: +STEP: delete the pod +Dec 10 10:06:01.495: INFO: Waiting for pod pod-subpath-test-configmap-pt2f to disappear +Dec 10 10:06:01.496: INFO: Pod pod-subpath-test-configmap-pt2f no longer exists +STEP: Deleting pod pod-subpath-test-configmap-pt2f +Dec 10 10:06:01.496: INFO: Deleting pod "pod-subpath-test-configmap-pt2f" in namespace "subpath-4690" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:06:01.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4690" for this suite. 
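
The case above exercises `subPath` mounts: instead of exposing the whole ConfigMap volume, the container mounts a single path within it, and the test verifies the container sees the expected file contents (the pod intentionally stays Running for roughly 24 seconds before succeeding, as the polling lines above show). A minimal sketch with hypothetical names:

```
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-configmap  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "for i in $(seq 1 24); do cat /test-volume/file; sleep 1; done"]
    volumeMounts:
    - name: config
      mountPath: /test-volume/file
      subPath: the-key         # mount only this key of the volume at this path
  volumes:
  - name: config
    configMap:
      name: my-config          # hypothetical ConfigMap containing a key "the-key"
```
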
+Dec 10 10:06:07.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:06:07.581: INFO: namespace subpath-4690 deletion completed in 6.081267079s + +• [SLOW TEST:30.306 seconds] +[sig-storage] Subpath +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:06:07.581: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3629 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating configMap with name projected-configmap-test-volume-9c972148-4a58-4a1f-bb41-6ab3514602ee +STEP: Creating a pod to test consume configMaps +Dec 10 10:06:07.793: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e08b0603-cb88-481c-a383-5159f6bdb8e2" in namespace "projected-3629" to be "success or failure" +Dec 10 10:06:07.795: INFO: Pod "pod-projected-configmaps-e08b0603-cb88-481c-a383-5159f6bdb8e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368313ms +Dec 10 10:06:09.799: INFO: Pod "pod-projected-configmaps-e08b0603-cb88-481c-a383-5159f6bdb8e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005931068s +Dec 10 10:06:11.801: INFO: Pod "pod-projected-configmaps-e08b0603-cb88-481c-a383-5159f6bdb8e2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008288076s +STEP: Saw pod success +Dec 10 10:06:11.801: INFO: Pod "pod-projected-configmaps-e08b0603-cb88-481c-a383-5159f6bdb8e2" satisfied condition "success or failure" +Dec 10 10:06:11.803: INFO: Trying to get logs from node dce82 pod pod-projected-configmaps-e08b0603-cb88-481c-a383-5159f6bdb8e2 container projected-configmap-volume-test: +STEP: delete the pod +Dec 10 10:06:11.817: INFO: Waiting for pod pod-projected-configmaps-e08b0603-cb88-481c-a383-5159f6bdb8e2 to disappear +Dec 10 10:06:11.820: INFO: Pod pod-projected-configmaps-e08b0603-cb88-481c-a383-5159f6bdb8e2 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:06:11.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3629" for this suite. +Dec 10 10:06:17.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:06:17.912: INFO: namespace projected-3629 deletion completed in 6.088047338s + +• [SLOW TEST:10.331 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:06:17.912: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9738 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating projection with secret that has name projected-secret-test-9b3a759e-4fdd-431a-b1c1-fb91327668a1 +STEP: Creating a pod to test consume secrets +Dec 10 10:06:18.062: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-76caafbc-5feb-437e-9943-594cae5542db" in namespace "projected-9738" to be "success or failure" +Dec 10 10:06:18.066: INFO: Pod "pod-projected-secrets-76caafbc-5feb-437e-9943-594cae5542db": Phase="Pending", Reason="", readiness=false. Elapsed: 3.07016ms +Dec 10 10:06:20.070: INFO: Pod "pod-projected-secrets-76caafbc-5feb-437e-9943-594cae5542db": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007573974s +Dec 10 10:06:22.075: INFO: Pod "pod-projected-secrets-76caafbc-5feb-437e-9943-594cae5542db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01254795s +STEP: Saw pod success +Dec 10 10:06:22.075: INFO: Pod "pod-projected-secrets-76caafbc-5feb-437e-9943-594cae5542db" satisfied condition "success or failure" +Dec 10 10:06:22.078: INFO: Trying to get logs from node dce82 pod pod-projected-secrets-76caafbc-5feb-437e-9943-594cae5542db container projected-secret-volume-test: +STEP: delete the pod +Dec 10 10:06:22.093: INFO: Waiting for pod pod-projected-secrets-76caafbc-5feb-437e-9943-594cae5542db to disappear +Dec 10 10:06:22.153: INFO: Pod pod-projected-secrets-76caafbc-5feb-437e-9943-594cae5542db no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:06:22.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9738" for this suite. +Dec 10 10:06:28.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:06:28.249: INFO: namespace projected-9738 deletion completed in 6.09222701s + +• [SLOW TEST:10.337 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:06:28.249: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7853 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 +STEP: Creating service test in namespace statefulset-7853 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a new StatefulSet +Dec 10 10:06:28.404: INFO: Found 0 stateful 
pods, waiting for 3 +Dec 10 10:06:38.410: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Dec 10 10:06:38.410: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Dec 10 10:06:38.410: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine +Dec 10 10:06:38.439: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Dec 10 10:06:48.474: INFO: Updating stateful set ss2 +Dec 10 10:06:48.481: INFO: Waiting for Pod statefulset-7853/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c +Dec 10 10:06:58.489: INFO: Waiting for Pod statefulset-7853/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c +STEP: Restoring Pods to the correct revision when they are deleted +Dec 10 10:07:08.517: INFO: Found 2 stateful pods, waiting for 3 +Dec 10 10:07:18.521: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Dec 10 10:07:18.522: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Dec 10 10:07:18.522: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Dec 10 10:07:18.550: INFO: Updating stateful set ss2 +Dec 10 10:07:18.556: INFO: Waiting for Pod statefulset-7853/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c +Dec 10 10:07:28.584: INFO: Updating stateful set ss2 +Dec 10 10:07:28.592: INFO: Waiting for StatefulSet statefulset-7853/ss2 to complete update +Dec 10 10:07:28.592: INFO: Waiting for Pod statefulset-7853/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c +Dec 10 10:07:38.597: INFO: Waiting for StatefulSet statefulset-7853/ss2 to complete update +Dec 10 10:07:38.597: INFO: Waiting for Pod statefulset-7853/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 +Dec 10 10:07:48.598: INFO: Deleting all statefulset in ns statefulset-7853 +Dec 10 10:07:48.601: INFO: Scaling statefulset ss2 to 0 +Dec 10 10:08:18.615: INFO: Waiting for statefulset status.replicas updated to 0 +Dec 10 10:08:18.617: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:08:18.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7853" for this suite. 
+Dec 10 10:08:24.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:08:24.712: INFO: namespace statefulset-7853 deletion completed in 6.080368253s + +• [SLOW TEST:116.463 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:08:24.713: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5985 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating secret with name secret-test-b48a3ac7-88e5-4d1e-b3d0-573f41d5e307 +STEP: Creating a pod to test consume secrets +Dec 10 10:08:24.915: INFO: Waiting up to 5m0s for pod "pod-secrets-fa1c1a0f-897c-4a68-aa2f-d97ee4c500bc" in namespace "secrets-5985" to be "success or failure" +Dec 10 10:08:24.917: INFO: Pod "pod-secrets-fa1c1a0f-897c-4a68-aa2f-d97ee4c500bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54798ms +Dec 10 10:08:26.922: INFO: Pod "pod-secrets-fa1c1a0f-897c-4a68-aa2f-d97ee4c500bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00706437s +Dec 10 10:08:28.925: INFO: Pod "pod-secrets-fa1c1a0f-897c-4a68-aa2f-d97ee4c500bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010398778s +STEP: Saw pod success +Dec 10 10:08:28.925: INFO: Pod "pod-secrets-fa1c1a0f-897c-4a68-aa2f-d97ee4c500bc" satisfied condition "success or failure" +Dec 10 10:08:28.929: INFO: Trying to get logs from node dce82 pod pod-secrets-fa1c1a0f-897c-4a68-aa2f-d97ee4c500bc container secret-volume-test: +STEP: delete the pod +Dec 10 10:08:28.948: INFO: Waiting for pod pod-secrets-fa1c1a0f-897c-4a68-aa2f-d97ee4c500bc to disappear +Dec 10 10:08:28.951: INFO: Pod pod-secrets-fa1c1a0f-897c-4a68-aa2f-d97ee4c500bc no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:08:28.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5985" for this suite. +Dec 10 10:08:34.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:08:35.045: INFO: namespace secrets-5985 deletion completed in 6.090847197s + +• [SLOW TEST:10.332 seconds] +[sig-storage] Secrets +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:08:35.045: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-6168 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a pod to test use defaults +Dec 10 10:08:35.196: INFO: Waiting up to 5m0s for pod "client-containers-434d1bbc-f4b0-4fe4-9184-89fafc966861" in namespace "containers-6168" to be "success or failure" +Dec 10 10:08:35.198: INFO: Pod "client-containers-434d1bbc-f4b0-4fe4-9184-89fafc966861": Phase="Pending", Reason="", readiness=false. Elapsed: 2.669819ms +Dec 10 10:08:37.202: INFO: Pod "client-containers-434d1bbc-f4b0-4fe4-9184-89fafc966861": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006470609s +Dec 10 10:08:39.205: INFO: Pod "client-containers-434d1bbc-f4b0-4fe4-9184-89fafc966861": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009547899s +STEP: Saw pod success +Dec 10 10:08:39.205: INFO: Pod "client-containers-434d1bbc-f4b0-4fe4-9184-89fafc966861" satisfied condition "success or failure" +Dec 10 10:08:39.208: INFO: Trying to get logs from node dce82 pod client-containers-434d1bbc-f4b0-4fe4-9184-89fafc966861 container test-container: +STEP: delete the pod +Dec 10 10:08:39.223: INFO: Waiting for pod client-containers-434d1bbc-f4b0-4fe4-9184-89fafc966861 to disappear +Dec 10 10:08:39.226: INFO: Pod client-containers-434d1bbc-f4b0-4fe4-9184-89fafc966861 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:08:39.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-6168" for this suite. +Dec 10 10:08:45.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:08:45.323: INFO: namespace containers-6168 deletion completed in 6.090394331s + +• [SLOW TEST:10.277 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:08:45.323: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-9422 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +Dec 10 10:08:45.488: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e2d9ee0f-ff33-4263-9776-9e9c4e55e133", Controller:(*bool)(0xc002b7e38a), BlockOwnerDeletion:(*bool)(0xc002b7e38b)}} +Dec 10 10:08:45.492: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7bccac40-eb28-4725-8400-3aff9f2905df", Controller:(*bool)(0xc002b7e52a), BlockOwnerDeletion:(*bool)(0xc002b7e52b)}} +Dec 10 10:08:45.495: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5ea7edea-dcb5-4913-96e2-9a5803081662", Controller:(*bool)(0xc002b7e6ca), BlockOwnerDeletion:(*bool)(0xc002b7e6cb)}} +[AfterEach] [sig-api-machinery] Garbage collector + 
/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:08:50.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-9422" for this suite. +Dec 10 10:08:56.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:08:56.600: INFO: namespace gc-9422 deletion completed in 6.09601618s + +• [SLOW TEST:11.277 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:08:56.600: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-7535 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Dec 10 10:09:01.765: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:09:02.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-7535" for this suite. 
+Dec 10 10:09:24.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:09:24.870: INFO: namespace replicaset-7535 deletion completed in 22.087928014s + +• [SLOW TEST:28.270 seconds] +[sig-apps] ReplicaSet +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +S +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:09:24.870: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-5410 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 +[It] deployment should support proportional scaling [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +Dec 10 10:09:25.015: INFO: Creating deployment "nginx-deployment" +Dec 10 10:09:25.018: INFO: Waiting for observed generation 1 +Dec 10 10:09:27.027: INFO: Waiting for all required pods to come up +Dec 10 10:09:27.033: INFO: Pod name nginx: Found 10 pods out of 10 +STEP: ensuring each pod is running +Dec 10 10:09:31.042: INFO: Waiting for deployment "nginx-deployment" to complete +Dec 10 10:09:31.048: INFO: Updating deployment "nginx-deployment" with a non-existent image +Dec 10 10:09:31.055: INFO: Updating deployment nginx-deployment +Dec 10 10:09:31.055: INFO: Waiting for observed generation 2 +Dec 10 10:09:33.062: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Dec 10 10:09:33.065: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Dec 10 10:09:33.067: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas +Dec 10 10:09:33.075: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Dec 10 10:09:33.075: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Dec 10 10:09:33.078: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas +Dec 10 10:09:33.082: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas +Dec 10 10:09:33.082: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 +Dec 10 10:09:33.087: INFO: Updating deployment nginx-deployment +Dec 10 10:09:33.087: INFO: Waiting for the replicasets of deployment 
"nginx-deployment" to have desired number of replicas +Dec 10 10:09:33.090: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Dec 10 10:09:33.092: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 +Dec 10 10:09:33.100: INFO: Deployment "nginx-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-5410,SelfLink:/apis/apps/v1/namespaces/deployment-5410/deployments/nginx-deployment,UID:f2acfcc2-cb96-4a9a-a75a-ba982f72975f,ResourceVersion:361484,Generation:3,CreationTimestamp:2019-12-10 10:09:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2019-12-10 10:09:31 +0000 UTC 2019-12-10 10:09:25 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2019-12-10 10:09:33 +0000 UTC 2019-12-10 10:09:33 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} + +Dec 10 10:09:33.107: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": 
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-5410,SelfLink:/apis/apps/v1/namespaces/deployment-5410/replicasets/nginx-deployment-55fb7cb77f,UID:045cc4a1-6f8d-46f9-bfe0-e4fad1922619,ResourceVersion:361481,Generation:3,CreationTimestamp:2019-12-10 10:09:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f2acfcc2-cb96-4a9a-a75a-ba982f72975f 0xc0025e5dd7 0xc0025e5dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Dec 10 10:09:33.107: INFO: All old ReplicaSets of Deployment "nginx-deployment": +Dec 10 10:09:33.108: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-5410,SelfLink:/apis/apps/v1/namespaces/deployment-5410/replicasets/nginx-deployment-7b8c6f4498,UID:c5feac8f-e72f-4951-8333-df63c8202e3b,ResourceVersion:361479,Generation:3,CreationTimestamp:2019-12-10 10:09:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f2acfcc2-cb96-4a9a-a75a-ba982f72975f 
0xc0025e5ec7 0xc0025e5ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} +Dec 10 10:09:33.121: INFO: Pod "nginx-deployment-55fb7cb77f-6kv74" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6kv74,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-6kv74,UID:7496a710-840f-4f0b-a4b8-997a9f272336,ResourceVersion:361504,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e10fc7 0xc002e10fc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11030} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e11050}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.121: INFO: Pod "nginx-deployment-55fb7cb77f-74bkx" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-74bkx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-74bkx,UID:56726d0d-4cda-4101-a756-1fb2075a8126,ResourceVersion:361503,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e110b0 0xc002e110b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11130} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e11150}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:33 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.121: INFO: Pod "nginx-deployment-55fb7cb77f-8jght" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8jght,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-8jght,UID:24a0b559-3aec-4e7f-a427-fd258c80ff8c,ResourceVersion:361506,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e111d0 0xc002e111d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce83,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11250} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e11270}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.121: INFO: Pod "nginx-deployment-55fb7cb77f-fqqnt" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fqqnt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-fqqnt,UID:5a4b03e2-e509-43e6-973c-fdedfcc1c65a,ResourceVersion:361428,Generation:0,CreationTimestamp:2019-12-10 10:09:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e112e0 
0xc002e112e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce83,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11360} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e11380}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.83,PodIP:,StartTime:2019-12-10 10:09:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.121: INFO: Pod "nginx-deployment-55fb7cb77f-ftwpq" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ftwpq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-ftwpq,UID:1d8e8060-65d4-433b-8d08-fef209369aa9,ResourceVersion:361502,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e11450 0xc002e11451}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e114c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e114e0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.122: INFO: Pod "nginx-deployment-55fb7cb77f-gd5sh" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gd5sh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-gd5sh,UID:c2abfec4-d20a-453e-94e8-ae7317c91363,ResourceVersion:361446,Generation:0,CreationTimestamp:2019-12-10 10:09:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e11540 0xc002e11541}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce83,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e115c0} {node.kubernetes.io/unreachable Exists 
NoExecute 0xc002e115e0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.83,PodIP:,StartTime:2019-12-10 10:09:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.122: INFO: Pod "nginx-deployment-55fb7cb77f-gshbd" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gshbd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-gshbd,UID:1aca7529-e0f0-4b5d-9457-69e48f433f7f,ResourceVersion:361491,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e116a0 0xc002e116a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce81,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11720} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e11740}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:33 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.122: INFO: Pod "nginx-deployment-55fb7cb77f-k4ngt" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k4ngt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-k4ngt,UID:97a591a2-0510-49c8-9a0b-75f61ebdd253,ResourceVersion:361421,Generation:0,CreationTimestamp:2019-12-10 10:09:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e117b0 0xc002e117b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11830} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e11850}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.82,PodIP:,StartTime:2019-12-10 10:09:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.122: INFO: Pod "nginx-deployment-55fb7cb77f-lb4hf" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lb4hf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-lb4hf,UID:3cc2cf90-65ee-43b3-8f39-7a94f90f218e,ResourceVersion:361507,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e11910 0xc002e11911}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11980} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e119a0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.122: INFO: Pod "nginx-deployment-55fb7cb77f-t7nlv" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t7nlv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-t7nlv,UID:2338446a-d13f-42fd-bfde-fc1693bdf47a,ResourceVersion:361505,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e11a00 0xc002e11a01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e11a90}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.123: INFO: Pod "nginx-deployment-55fb7cb77f-thjw4" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-thjw4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-thjw4,UID:dc433931-153a-4975-9282-375c099b0410,ResourceVersion:361444,Generation:0,CreationTimestamp:2019-12-10 10:09:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e11af0 0xc002e11af1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce81,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e11b90}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.81,PodIP:,StartTime:2019-12-10 10:09:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.123: INFO: Pod "nginx-deployment-55fb7cb77f-z5kv8" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z5kv8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-55fb7cb77f-z5kv8,UID:5d09f0a8-a418-4f51-a109-b41085b05437,ResourceVersion:361443,Generation:0,CreationTimestamp:2019-12-10 10:09:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 045cc4a1-6f8d-46f9-bfe0-e4fad1922619 0xc002e11c50 0xc002e11c51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11cd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e11cf0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:31 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.82,PodIP:,StartTime:2019-12-10 10:09:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.123: INFO: Pod "nginx-deployment-7b8c6f4498-54d8m" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-54d8m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-54d8m,UID:39cb9f75-f0b7-41fb-91a7-18937277c0c3,ResourceVersion:361394,Generation:0,CreationTimestamp:2019-12-10 10:09:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002e11db0 0xc002e11db1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce81,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11e20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e11e50}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.81,PodIP:172.28.194.215,StartTime:2019-12-10 10:09:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-10 10:09:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://910495073a96e039fe505b37276299872388daa474cacd5343e67ba63c86661d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.123: INFO: Pod "nginx-deployment-7b8c6f4498-6ftz6" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6ftz6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-6ftz6,UID:1ca9ae79-a825-4b3a-9829-1773dc692944,ResourceVersion:361499,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002e11f10 0xc002e11f11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce83,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e11f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e11fa0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.124: INFO: Pod "nginx-deployment-7b8c6f4498-8pxrz" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8pxrz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-8pxrz,UID:b9ecbb55-1bf8-45ab-a762-2074246130f3,ResourceVersion:361501,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5a020 0xc002a5a021}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5a080} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002a5a0a0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.124: INFO: Pod "nginx-deployment-7b8c6f4498-9978q" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9978q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-9978q,UID:b9e54f49-be0c-4b8c-839f-f74b8b64f0fc,ResourceVersion:361344,Generation:0,CreationTimestamp:2019-12-10 10:09:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5a100 0xc002a5a101}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5a170} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a5a190}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.82,PodIP:172.28.8.65,StartTime:2019-12-10 10:09:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-10 10:09:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://8b564ecc2fab0b088cc84febd19a27ea23149b77e8e4811c0dbcf637b36594f9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.124: INFO: Pod "nginx-deployment-7b8c6f4498-9nzsw" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9nzsw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-9nzsw,UID:2aec0769-8787-49bb-b155-8729d0a7dd7d,ResourceVersion:361497,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5a250 0xc002a5a251}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5a2b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a5a2d0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.124: INFO: Pod "nginx-deployment-7b8c6f4498-gwmw2" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gwmw2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-gwmw2,UID:2ec50bd0-8870-4ea7-94b6-bfa5f4f7e77c,ResourceVersion:361511,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5a330 
0xc002a5a331}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5a3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a5a3c0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.124: INFO: Pod "nginx-deployment-7b8c6f4498-hww6x" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hww6x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-hww6x,UID:c9e099ca-9eb9-410a-8c35-d170a644b82b,ResourceVersion:361340,Generation:0,CreationTimestamp:2019-12-10 10:09:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5a430 0xc002a5a431}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5a4a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a5a4c0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.82,PodIP:172.28.8.127,StartTime:2019-12-10 10:09:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-10 10:09:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://141f6632d49f289b359dbb4de595330ddbc959fcab9785b2ed7eebc31fad4a36}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.124: INFO: Pod "nginx-deployment-7b8c6f4498-jlx9q" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jlx9q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-jlx9q,UID:ef7a0d20-2d19-4c00-87d9-7ef0ff4b349e,ResourceVersion:361375,Generation:0,CreationTimestamp:2019-12-10 10:09:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5a580 0xc002a5a581}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5a5f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a5a610}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.82,PodIP:172.28.8.66,StartTime:2019-12-10 10:09:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-10 10:09:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://67c3dcd81329a1857b76a0b9c518f50f6da4ea06af7a676fd5df1d17d469c6f1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.125: INFO: Pod "nginx-deployment-7b8c6f4498-kd5nr" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kd5nr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-kd5nr,UID:ac1b422f-774e-4657-86ac-c7fe9210c950,ResourceVersion:361363,Generation:0,CreationTimestamp:2019-12-10 10:09:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5a6e0 0xc002a5a6e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce83,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5a750} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a5a770}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.83,PodIP:172.28.104.243,StartTime:2019-12-10 10:09:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-10 10:09:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://19c2d465fe9c303dae9133be8c8684ab38a73d3c4cb5ea8f0eec74c71ddebff7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.125: INFO: Pod "nginx-deployment-7b8c6f4498-kkh7m" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kkh7m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-kkh7m,UID:0fd1d6ec-c438-463a-826a-0da80a8ae9e0,ResourceVersion:361488,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5a830 0xc002a5a831}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce81,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5a8a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a5a8c0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.125: INFO: Pod "nginx-deployment-7b8c6f4498-rmmhp" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rmmhp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-rmmhp,UID:d2e0ae2f-993e-4458-8858-69f6ee7f2ed6,ResourceVersion:361366,Generation:0,CreationTimestamp:2019-12-10 10:09:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5a930 0xc002a5a931}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce83,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5a9a0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002a5a9c0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.83,PodIP:172.28.104.239,StartTime:2019-12-10 10:09:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-10 10:09:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bfbd603001ba93de20b7f963f04245e997ef35213f93cc5e3f917ef046357dd4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.125: INFO: Pod "nginx-deployment-7b8c6f4498-sk9nb" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sk9nb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-sk9nb,UID:bba187fc-4889-4962-9446-e2b0501d7e70,ResourceVersion:361360,Generation:0,CreationTimestamp:2019-12-10 10:09:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5aa80 0xc002a5aa81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce83,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5aaf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a5ab10}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC } 
{Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.83,PodIP:172.28.104.244,StartTime:2019-12-10 10:09:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-10 10:09:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3292e5534fb910b0d2878cb37d0c840bffbe0501999ebc6570682857f43fe921}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.125: INFO: Pod "nginx-deployment-7b8c6f4498-ttcmq" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ttcmq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-ttcmq,UID:ce31b6ad-a78a-4eeb-a101-c9a7db85954e,ResourceVersion:361378,Generation:0,CreationTimestamp:2019-12-10 10:09:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5abd0 0xc002a5abd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5ac40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a5ac60}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:25 +0000 UTC }],Message:,Reason:,HostIP:10.6.135.82,PodIP:172.28.8.68,StartTime:2019-12-10 
10:09:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-10 10:09:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0f809e3fc9bc661957c34398af918deae807615ce088b7cfd4017a9c14f1ee58}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.126: INFO: Pod "nginx-deployment-7b8c6f4498-tv5lj" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tv5lj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-tv5lj,UID:d3e674d0-e829-4315-bce0-b65d0b381ca2,ResourceVersion:361510,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5ad20 0xc002a5ad21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce81,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5ad90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a5adb0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 10 10:09:33.126: INFO: Pod "nginx-deployment-7b8c6f4498-xq965" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xq965,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5410,SelfLink:/api/v1/namespaces/deployment-5410/pods/nginx-deployment-7b8c6f4498-xq965,UID:a0b63e79-3034-4ea5-b6a6-21b73a59b7b7,ResourceVersion:361498,Generation:0,CreationTimestamp:2019-12-10 10:09:33 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c5feac8f-e72f-4951-8333-df63c8202e3b 0xc002a5ae20 0xc002a5ae21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgmtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgmtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pgmtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce81,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a5ae90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a5aeb0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:09:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:09:33.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-5410" for this suite. 
+Dec 10 10:09:41.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:09:41.262: INFO: namespace deployment-5410 deletion completed in 8.124872598s + +• [SLOW TEST:16.392 seconds] +[sig-apps] Deployment +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should support proportional scaling [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-network] DNS + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:09:41.263: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-299 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-299.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-299.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-299.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-299.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-299.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-299.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Dec 10 10:09:45.461: INFO: DNS probes using dns-299/dns-test-44dcc89f-d66f-483b-8ceb-6bcab9e1d90c succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:09:45.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-299" for this suite. +Dec 10 10:09:51.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:09:51.561: INFO: namespace dns-299 deletion completed in 6.086960251s + +• [SLOW TEST:10.298 seconds] +[sig-network] DNS +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl label + should update the label on a resource [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:09:51.561: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3808 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 +[BeforeEach] [k8s.io] Kubectl label + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1211 +STEP: creating the pod +Dec 10 10:09:51.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-3808' +Dec 10 10:09:52.124: INFO: stderr: "" +Dec 10 10:09:52.124: INFO: stdout: "pod/pause created\n" +Dec 10 10:09:52.124: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Dec 10 10:09:52.124: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3808" to be "running and ready" +Dec 10 10:09:52.203: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 78.449268ms +Dec 10 10:09:54.206: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.081106989s +Dec 10 10:09:54.206: INFO: Pod "pause" satisfied condition "running and ready" +Dec 10 10:09:54.206: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: adding the label testing-label with value testing-label-value to a pod +Dec 10 10:09:54.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 label pods pause testing-label=testing-label-value --namespace=kubectl-3808' +Dec 10 10:09:54.294: INFO: stderr: "" +Dec 10 10:09:54.294: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Dec 10 10:09:54.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pod pause -L testing-label --namespace=kubectl-3808' +Dec 10 10:09:54.373: INFO: stderr: "" +Dec 10 10:09:54.373: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" +STEP: removing the label testing-label of a pod +Dec 10 10:09:54.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 label pods pause testing-label- --namespace=kubectl-3808' +Dec 10 10:09:54.458: INFO: stderr: "" +Dec 10 10:09:54.458: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Dec 10 10:09:54.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pod pause -L testing-label --namespace=kubectl-3808' +Dec 10 10:09:54.546: INFO: stderr: "" +Dec 10 10:09:54.546: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" +[AfterEach] [k8s.io] Kubectl label + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1218 +STEP: using delete to clean up resources +Dec 10 10:09:54.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete --grace-period=0 --force -f - --namespace=kubectl-3808' +Dec 10 10:09:54.632: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Dec 10 10:09:54.632: INFO: stdout: "pod \"pause\" force deleted\n" +Dec 10 10:09:54.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get rc,svc -l name=pause --no-headers --namespace=kubectl-3808' +Dec 10 10:09:54.740: INFO: stderr: "No resources found.\n" +Dec 10 10:09:54.740: INFO: stdout: "" +Dec 10 10:09:54.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -l name=pause --namespace=kubectl-3808 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Dec 10 10:09:54.839: INFO: stderr: "" +Dec 10 10:09:54.839: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:09:54.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3808" for this suite. 
+Dec 10 10:10:00.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:10:00.942: INFO: namespace kubectl-3808 deletion completed in 6.098402683s + +• [SLOW TEST:9.381 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl label + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should update the label on a resource [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:10:00.942: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-5856 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 +STEP: Setting up data +[It] should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating pod pod-subpath-test-projected-xzj8 +STEP: Creating a pod to test atomic-volume-subpath +Dec 10 10:10:01.099: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-xzj8" in namespace "subpath-5856" to be "success or failure" +Dec 10 10:10:01.101: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.844342ms +Dec 10 10:10:03.104: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Running", Reason="", readiness=true. Elapsed: 2.004873374s +Dec 10 10:10:05.109: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Running", Reason="", readiness=true. Elapsed: 4.009817541s +Dec 10 10:10:07.113: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Running", Reason="", readiness=true. Elapsed: 6.013743861s +Dec 10 10:10:09.118: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Running", Reason="", readiness=true. Elapsed: 8.018391836s +Dec 10 10:10:11.122: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Running", Reason="", readiness=true. Elapsed: 10.023021799s +Dec 10 10:10:13.128: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Running", Reason="", readiness=true. Elapsed: 12.028230478s +Dec 10 10:10:15.133: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Running", Reason="", readiness=true. Elapsed: 14.033748131s +Dec 10 10:10:17.137: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.037966912s +Dec 10 10:10:19.141: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Running", Reason="", readiness=true. Elapsed: 18.041993065s +Dec 10 10:10:21.148: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Running", Reason="", readiness=true. Elapsed: 20.048585234s +Dec 10 10:10:23.151: INFO: Pod "pod-subpath-test-projected-xzj8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.051400762s +STEP: Saw pod success +Dec 10 10:10:23.151: INFO: Pod "pod-subpath-test-projected-xzj8" satisfied condition "success or failure" +Dec 10 10:10:23.154: INFO: Trying to get logs from node dce82 pod pod-subpath-test-projected-xzj8 container test-container-subpath-projected-xzj8: +STEP: delete the pod +Dec 10 10:10:23.198: INFO: Waiting for pod pod-subpath-test-projected-xzj8 to disappear +Dec 10 10:10:23.200: INFO: Pod pod-subpath-test-projected-xzj8 no longer exists +STEP: Deleting pod pod-subpath-test-projected-xzj8 +Dec 10 10:10:23.200: INFO: Deleting pod "pod-subpath-test-projected-xzj8" in namespace "subpath-5856" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:10:23.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-5856" for this suite. +Dec 10 10:10:29.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:10:29.289: INFO: namespace subpath-5856 deletion completed in 6.083879651s + +• [SLOW TEST:28.346 seconds] +[sig-storage] Subpath +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] Container Runtime + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:10:29.289: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-6856 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + 
/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Dec 10 10:10:31.450: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container +[AfterEach] [k8s.io] Container Runtime + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:10:31.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-6856" for this suite. +Dec 10 10:10:37.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:10:37.593: INFO: namespace container-runtime-6856 deletion completed in 6.127987712s + +• [SLOW TEST:8.304 seconds] +[k8s.io] Container Runtime +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + blackbox test + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 + on terminated container + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 + should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-network] DNS + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:10:37.594: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-6256 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6256.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6256.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6256.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6256.svc.cluster.local SRV)" && test -n "$$check" && echo OK 
> /results/wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6256.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6256.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6256.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6256.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6256.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6256.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 52.3.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.3.52_udp@PTR;check="$$(dig +tcp +noall +answer +search 52.3.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.3.52_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6256.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6256.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6256.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6256.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6256.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6256.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6256.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6256.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6256.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6256.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6256.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 52.3.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.3.52_udp@PTR;check="$$(dig +tcp +noall +answer +search 52.3.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.3.52_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Dec 10 10:10:41.761: INFO: Unable to read wheezy_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:41.764: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:41.767: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:41.771: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:41.793: INFO: Unable to read jessie_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:41.796: INFO: Unable to read jessie_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:41.798: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:41.801: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:41.812: INFO: Lookups using dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816 failed for: [wheezy_udp@dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_udp@dns-test-service.dns-6256.svc.cluster.local jessie_tcp@dns-test-service.dns-6256.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local] + +Dec 10 10:10:46.816: INFO: Unable to read wheezy_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:46.819: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods 
dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:46.822: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:46.825: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:46.852: INFO: Unable to read jessie_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:46.855: INFO: Unable to read jessie_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:46.857: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:46.860: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:46.881: INFO: Lookups using dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816 failed for: [wheezy_udp@dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_udp@dns-test-service.dns-6256.svc.cluster.local jessie_tcp@dns-test-service.dns-6256.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local] + +Dec 10 10:10:51.816: INFO: Unable to read wheezy_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:51.820: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:51.824: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:51.828: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:51.856: INFO: Unable to read jessie_udp@dns-test-service.dns-6256.svc.cluster.local from pod 
dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:51.858: INFO: Unable to read jessie_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:51.862: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:51.865: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:51.887: INFO: Lookups using dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816 failed for: [wheezy_udp@dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_udp@dns-test-service.dns-6256.svc.cluster.local jessie_tcp@dns-test-service.dns-6256.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local] + +Dec 10 10:10:56.816: INFO: Unable to read wheezy_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:56.818: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:56.821: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:56.825: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:56.843: INFO: Unable to read jessie_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:56.845: INFO: Unable to read jessie_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:56.847: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:56.849: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:10:56.865: INFO: Lookups using dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816 failed for: [wheezy_udp@dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_udp@dns-test-service.dns-6256.svc.cluster.local jessie_tcp@dns-test-service.dns-6256.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local] + +Dec 10 10:11:01.817: INFO: Unable to read wheezy_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:01.821: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:01.825: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:01.830: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:01.853: INFO: Unable to read jessie_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:01.857: INFO: Unable to read jessie_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:01.859: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:01.862: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:01.884: INFO: Lookups using dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816 failed for: [wheezy_udp@dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_udp@dns-test-service.dns-6256.svc.cluster.local jessie_tcp@dns-test-service.dns-6256.svc.cluster.local 
jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local] + +Dec 10 10:11:06.817: INFO: Unable to read wheezy_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:06.821: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:06.824: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:06.828: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:06.854: INFO: Unable to read jessie_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:06.857: INFO: Unable to read jessie_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:06.861: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:06.864: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:06.884: INFO: Lookups using dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816 failed for: [wheezy_udp@dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_udp@dns-test-service.dns-6256.svc.cluster.local jessie_tcp@dns-test-service.dns-6256.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local] + +Dec 10 10:11:11.816: INFO: Unable to read wheezy_udp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:11.820: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:11.823: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:11.827: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local from pod dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816: the server could not find the requested resource (get pods dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816) +Dec 10 10:11:11.884: INFO: Lookups using dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816 failed for: [wheezy_udp@dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@dns-test-service.dns-6256.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6256.svc.cluster.local] + +Dec 10 10:11:16.879: INFO: DNS probes using dns-6256/dns-test-f1c915aa-0a56-4d12-bad0-25c34c0c1816 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:11:16.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-6256" for this suite. +Dec 10 10:11:22.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:11:23.003: INFO: namespace dns-6256 deletion completed in 6.091630725s + +• [SLOW TEST:45.410 seconds] +[sig-network] DNS +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should provide DNS for services [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:11:23.003: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-5138 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Dec 10 10:11:23.294: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5138,SelfLink:/api/v1/namespaces/watch-5138/configmaps/e2e-watch-test-resource-version,UID:e8c55034-7607-4a04-a1b4-4e1017fbf218,ResourceVersion:362398,Generation:0,CreationTimestamp:2019-12-10 10:11:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Dec 10 10:11:23.294: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5138,SelfLink:/api/v1/namespaces/watch-5138/configmaps/e2e-watch-test-resource-version,UID:e8c55034-7607-4a04-a1b4-4e1017fbf218,ResourceVersion:362399,Generation:0,CreationTimestamp:2019-12-10 10:11:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:11:23.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-5138" for this suite. +Dec 10 10:11:29.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:11:29.388: INFO: namespace watch-5138 deletion completed in 6.090911156s + +• [SLOW TEST:6.384 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSS +------------------------------ +[k8s.io] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:11:29.388: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-3273 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + 
/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +Dec 10 10:11:29.532: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:11:31.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-3273" for this suite. +Dec 10 10:12:17.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:12:17.659: INFO: namespace pods-3273 deletion completed in 46.089770442s + +• [SLOW TEST:48.271 seconds] +[k8s.io] Pods +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:12:17.659: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-5293 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a pod to test override arguments +Dec 10 10:12:17.805: INFO: Waiting up to 5m0s for pod "client-containers-e93f7258-0ba3-4229-9d6d-49caa2b54daf" in namespace "containers-5293" to be "success or failure" +Dec 10 10:12:17.808: INFO: Pod "client-containers-e93f7258-0ba3-4229-9d6d-49caa2b54daf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.688995ms +Dec 10 10:12:19.810: INFO: Pod "client-containers-e93f7258-0ba3-4229-9d6d-49caa2b54daf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005330817s +Dec 10 10:12:21.813: INFO: Pod "client-containers-e93f7258-0ba3-4229-9d6d-49caa2b54daf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008111346s +STEP: Saw pod success +Dec 10 10:12:21.813: INFO: Pod "client-containers-e93f7258-0ba3-4229-9d6d-49caa2b54daf" satisfied condition "success or failure" +Dec 10 10:12:21.815: INFO: Trying to get logs from node dce82 pod client-containers-e93f7258-0ba3-4229-9d6d-49caa2b54daf container test-container: +STEP: delete the pod +Dec 10 10:12:21.829: INFO: Waiting for pod client-containers-e93f7258-0ba3-4229-9d6d-49caa2b54daf to disappear +Dec 10 10:12:21.832: INFO: Pod client-containers-e93f7258-0ba3-4229-9d6d-49caa2b54daf no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:12:21.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-5293" for this suite. +Dec 10 10:12:27.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:12:27.919: INFO: namespace containers-5293 deletion completed in 6.083161802s + +• [SLOW TEST:10.260 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:12:27.919: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3604 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating the pod +Dec 10 10:12:30.601: INFO: Successfully updated pod "labelsupdatec8b80c45-3899-4c9e-85eb-afcad4bf2015" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:12:32.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3604" for this suite. 
+Dec 10 10:12:54.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:12:54.700: INFO: namespace projected-3604 deletion completed in 22.071770238s + +• [SLOW TEST:26.781 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:12:54.700: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-6625 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should release no longer matching pods [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Dec 10 10:12:54.852: INFO: Pod name pod-release: Found 0 pods out of 1 +Dec 10 10:12:59.856: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:13:00.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-6625" for this suite. 
+Dec 10 10:13:06.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:13:06.957: INFO: namespace replication-controller-6625 deletion completed in 6.082301805s + +• [SLOW TEST:12.256 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should release no longer matching pods [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:13:06.957: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4161 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a pod to test emptydir 0777 on node default medium +Dec 10 10:13:07.107: INFO: Waiting up to 5m0s for pod "pod-02406cc5-3d2b-40fb-989f-0786d839b811" in namespace "emptydir-4161" to be "success or failure" +Dec 10 10:13:07.109: INFO: Pod "pod-02406cc5-3d2b-40fb-989f-0786d839b811": Phase="Pending", Reason="", readiness=false. Elapsed: 2.690365ms +Dec 10 10:13:09.112: INFO: Pod "pod-02406cc5-3d2b-40fb-989f-0786d839b811": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005470352s +STEP: Saw pod success +Dec 10 10:13:09.112: INFO: Pod "pod-02406cc5-3d2b-40fb-989f-0786d839b811" satisfied condition "success or failure" +Dec 10 10:13:09.115: INFO: Trying to get logs from node dce82 pod pod-02406cc5-3d2b-40fb-989f-0786d839b811 container test-container: +STEP: delete the pod +Dec 10 10:13:09.131: INFO: Waiting for pod pod-02406cc5-3d2b-40fb-989f-0786d839b811 to disappear +Dec 10 10:13:09.134: INFO: Pod pod-02406cc5-3d2b-40fb-989f-0786d839b811 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:13:09.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4161" for this suite. 
+Dec 10 10:13:15.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:13:15.217: INFO: namespace emptydir-4161 deletion completed in 6.078911328s + +• [SLOW TEST:8.261 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:13:15.217: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-7601 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating a pod to test override all +Dec 10 10:13:15.377: INFO: Waiting up to 5m0s for pod "client-containers-c5929903-99a6-4fa7-aba9-69f22d451393" in namespace "containers-7601" to be "success or failure" +Dec 10 10:13:15.380: INFO: Pod "client-containers-c5929903-99a6-4fa7-aba9-69f22d451393": Phase="Pending", Reason="", readiness=false. Elapsed: 3.223685ms +Dec 10 10:13:17.383: INFO: Pod "client-containers-c5929903-99a6-4fa7-aba9-69f22d451393": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006520216s +STEP: Saw pod success +Dec 10 10:13:17.383: INFO: Pod "client-containers-c5929903-99a6-4fa7-aba9-69f22d451393" satisfied condition "success or failure" +Dec 10 10:13:17.386: INFO: Trying to get logs from node dce82 pod client-containers-c5929903-99a6-4fa7-aba9-69f22d451393 container test-container: +STEP: delete the pod +Dec 10 10:13:17.400: INFO: Waiting for pod client-containers-c5929903-99a6-4fa7-aba9-69f22d451393 to disappear +Dec 10 10:13:17.401: INFO: Pod client-containers-c5929903-99a6-4fa7-aba9-69f22d451393 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:13:17.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-7601" for this suite. 
+Dec 10 10:13:23.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:13:23.490: INFO: namespace containers-7601 deletion completed in 6.085424797s + +• [SLOW TEST:8.273 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:13:23.490: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-7319 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Performing setup for networking test in namespace pod-network-test-7319 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Dec 10 10:13:23.628: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Dec 10 10:13:41.694: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.28.194.224 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7319 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 10:13:41.694: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 10:13:42.813: INFO: Found all expected endpoints: [netserver-0] +Dec 10 10:13:42.816: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.28.8.85 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7319 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 10:13:42.816: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 10:13:43.941: INFO: Found all expected endpoints: [netserver-1] +Dec 10 10:13:43.944: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.28.104.255 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7319 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 10 10:13:43.944: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +Dec 10 10:13:45.068: INFO: Found all expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking + 
/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:13:45.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-7319" for this suite. +Dec 10 10:14:07.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:14:07.157: INFO: namespace pod-network-test-7319 deletion completed in 22.083643273s + +• [SLOW TEST:43.667 seconds] +[sig-network] Networking +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +S +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:14:07.157: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3833 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating configMap configmap-3833/configmap-test-a79f925d-85ba-44ac-8e3d-ca128a3f76a2 +STEP: Creating a pod to test consume configMaps +Dec 10 10:14:07.308: INFO: Waiting up to 5m0s for pod "pod-configmaps-b1f68d18-01d2-4395-ace6-54add36b1b55" in namespace "configmap-3833" to be "success or failure" +Dec 10 10:14:07.309: INFO: Pod "pod-configmaps-b1f68d18-01d2-4395-ace6-54add36b1b55": Phase="Pending", Reason="", readiness=false. Elapsed: 1.430033ms +Dec 10 10:14:09.312: INFO: Pod "pod-configmaps-b1f68d18-01d2-4395-ace6-54add36b1b55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004260509s +Dec 10 10:14:11.317: INFO: Pod "pod-configmaps-b1f68d18-01d2-4395-ace6-54add36b1b55": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008881254s +STEP: Saw pod success +Dec 10 10:14:11.317: INFO: Pod "pod-configmaps-b1f68d18-01d2-4395-ace6-54add36b1b55" satisfied condition "success or failure" +Dec 10 10:14:11.320: INFO: Trying to get logs from node dce82 pod pod-configmaps-b1f68d18-01d2-4395-ace6-54add36b1b55 container env-test: +STEP: delete the pod +Dec 10 10:14:11.340: INFO: Waiting for pod pod-configmaps-b1f68d18-01d2-4395-ace6-54add36b1b55 to disappear +Dec 10 10:14:11.342: INFO: Pod pod-configmaps-b1f68d18-01d2-4395-ace6-54add36b1b55 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:14:11.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3833" for this suite. +Dec 10 10:14:17.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:14:17.434: INFO: namespace configmap-3833 deletion completed in 6.087145948s + +• [SLOW TEST:10.277 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:14:17.435: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8493 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating configMap with name configmap-test-volume-802af1f0-9149-4072-986a-3a828f28b8b2 +STEP: Creating a pod to test consume configMaps +Dec 10 10:14:17.587: INFO: Waiting up to 5m0s for pod "pod-configmaps-aac4b290-6f56-425c-880e-beeb9309a8c8" in namespace "configmap-8493" to be "success or failure" +Dec 10 10:14:17.589: INFO: Pod "pod-configmaps-aac4b290-6f56-425c-880e-beeb9309a8c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147477ms +Dec 10 10:14:19.592: INFO: Pod "pod-configmaps-aac4b290-6f56-425c-880e-beeb9309a8c8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005164007s +STEP: Saw pod success +Dec 10 10:14:19.592: INFO: Pod "pod-configmaps-aac4b290-6f56-425c-880e-beeb9309a8c8" satisfied condition "success or failure" +Dec 10 10:14:19.594: INFO: Trying to get logs from node dce82 pod pod-configmaps-aac4b290-6f56-425c-880e-beeb9309a8c8 container configmap-volume-test: +STEP: delete the pod +Dec 10 10:14:19.608: INFO: Waiting for pod pod-configmaps-aac4b290-6f56-425c-880e-beeb9309a8c8 to disappear +Dec 10 10:14:19.610: INFO: Pod pod-configmaps-aac4b290-6f56-425c-880e-beeb9309a8c8 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:14:19.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8493" for this suite. +Dec 10 10:14:25.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:14:25.704: INFO: namespace configmap-8493 deletion completed in 6.091847682s + +• [SLOW TEST:8.270 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:14:25.705: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-3868 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: creating the pod +Dec 10 10:14:25.847: INFO: PodSpec: initContainers in spec.initContainers +Dec 10 10:15:10.413: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-62cd72be-528b-4e95-985c-b1ddfeb5ff97", GenerateName:"", Namespace:"init-container-3868", SelfLink:"/api/v1/namespaces/init-container-3868/pods/pod-init-62cd72be-528b-4e95-985c-b1ddfeb5ff97", UID:"1e438481-5fed-4b80-9f27-a503f62c3c82", ResourceVersion:"363322", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63711569665, 
loc:(*time.Location)(0x7ec7a20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"847381843"}, Annotations:map[string]string{"kubernetes.io/psp":"dce-psp-allow-all"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9568p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0016e9a40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9568p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9568p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9568p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0028d7528), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"dce82", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002a39320), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028d75b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028d75d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0028d75d8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711569665, loc:(*time.Location)(0x7ec7a20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711569665, loc:(*time.Location)(0x7ec7a20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711569665, loc:(*time.Location)(0x7ec7a20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711569665, loc:(*time.Location)(0x7ec7a20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.6.135.82", PodIP:"172.28.8.96", StartTime:(*v1.Time)(0xc003c80b80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029f3420)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a4c000)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://20721e46dc09a930ff0bc2420bf348d66bb6c25ebbeadb1e801a55d822d7b297"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003c80be0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003c80ba0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:15:10.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-3868" for this suite. 
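+For anyone digging into a pod stuck in Init like the one this test provokes deliberately, the relevant fields of the pod dump above can be pulled out with jsonpath. The pod name is copied from the dump and its namespace was torn down by the suite, so substitute live values:
+```
+POD=pod-init-62cd72be-528b-4e95-985c-b1ddfeb5ff97   # from the dump above
+NS=init-container-3868                              # deleted after the test
+# init1 keeps restarting (/bin/false), so init2 never gets to run:
+kubectl -n "$NS" get pod "$POD" \
+  -o jsonpath='{range .status.initContainerStatuses[*]}{.name}={.restartCount}{"\n"}{end}'
+# ...and the app container run1 stays Waiting for the pod's whole life:
+kubectl -n "$NS" get pod "$POD" \
+  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
+```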
+Dec 10 10:15:32.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:15:32.509: INFO: namespace init-container-3868 deletion completed in 22.091664972s + +• [SLOW TEST:66.804 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:15:32.509: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-8505 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Dec 10 10:15:32.665: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-a,UID:44e018b7-d34f-470b-a695-1fc148c3ec70,ResourceVersion:363395,Generation:0,CreationTimestamp:2019-12-10 10:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Dec 10 10:15:32.665: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-a,UID:44e018b7-d34f-470b-a695-1fc148c3ec70,ResourceVersion:363395,Generation:0,CreationTimestamp:2019-12-10 10:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Dec 10 10:15:42.673: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-a,UID:44e018b7-d34f-470b-a695-1fc148c3ec70,ResourceVersion:363417,Generation:0,CreationTimestamp:2019-12-10 10:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +Dec 10 10:15:42.673: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-a,UID:44e018b7-d34f-470b-a695-1fc148c3ec70,ResourceVersion:363417,Generation:0,CreationTimestamp:2019-12-10 10:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Dec 10 10:15:52.680: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-a,UID:44e018b7-d34f-470b-a695-1fc148c3ec70,ResourceVersion:363437,Generation:0,CreationTimestamp:2019-12-10 10:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Dec 10 10:15:52.680: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-a,UID:44e018b7-d34f-470b-a695-1fc148c3ec70,ResourceVersion:363437,Generation:0,CreationTimestamp:2019-12-10 10:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Dec 10 10:16:02.688: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-a,UID:44e018b7-d34f-470b-a695-1fc148c3ec70,ResourceVersion:363457,Generation:0,CreationTimestamp:2019-12-10 10:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} +Dec 10 10:16:02.689: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-a,UID:44e018b7-d34f-470b-a695-1fc148c3ec70,ResourceVersion:363457,Generation:0,CreationTimestamp:2019-12-10 10:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Dec 10 10:16:12.693: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-b,UID:adca7dff-8318-46e1-a33a-cb3505119914,ResourceVersion:363479,Generation:0,CreationTimestamp:2019-12-10 10:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Dec 10 10:16:12.693: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-b,UID:adca7dff-8318-46e1-a33a-cb3505119914,ResourceVersion:363479,Generation:0,CreationTimestamp:2019-12-10 10:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Dec 10 10:16:22.698: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-b,UID:adca7dff-8318-46e1-a33a-cb3505119914,ResourceVersion:363500,Generation:0,CreationTimestamp:2019-12-10 10:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Dec 10 10:16:22.698: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8505,SelfLink:/api/v1/namespaces/watch-8505/configmaps/e2e-watch-test-configmap-b,UID:adca7dff-8318-46e1-a33a-cb3505119914,ResourceVersion:363500,Generation:0,CreationTimestamp:2019-12-10 10:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:16:32.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-8505" for this suite. +Dec 10 10:16:38.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:16:38.797: INFO: namespace watch-8505 deletion completed in 6.094551342s + +• [SLOW TEST:66.288 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] [sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:16:38.798: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename prestop +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-5967 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 +[It] should call prestop when killing a pod [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: Creating server pod server in namespace prestop-5967 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-5967 +STEP: Deleting pre-stop pod +Dec 10 10:16:56.001: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:16:56.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-5967" for this suite. 
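+The shape of the preStop hook behind the {"prestop": 1} counter above, reduced to a sketch. The actual test wires an exec hook to a purpose-built nettest server pod; the pod below only shows where such a hook lives in the API (name, port, and peer address are invented):
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: prestop-demo              # hypothetical
+spec:
+  containers:
+  - name: main
+    image: nginx
+    lifecycle:
+      preStop:
+        httpGet:                  # runs before SIGTERM reaches the container
+          path: /prestop
+          port: 8080
+          host: 10.0.0.1          # hypothetical peer that records the call
+EOF
+# Deleting the pod is what fires the hook:
+kubectl delete pod prestop-demo
+```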
+Dec 10 10:17:34.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:17:34.100: INFO: namespace prestop-5967 deletion completed in 38.087650851s + +• [SLOW TEST:55.302 seconds] +[k8s.io] [sig-node] PreStop +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should call prestop when killing a pod [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:17:34.100: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-7203 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 +STEP: create the container to handle the HTTPGet hook request. 
+[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: create the pod with lifecycle hook +STEP: delete the pod with lifecycle hook +Dec 10 10:17:40.278: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Dec 10 10:17:40.282: INFO: Pod pod-with-prestop-http-hook still exists +Dec 10 10:17:42.283: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Dec 10 10:17:42.286: INFO: Pod pod-with-prestop-http-hook still exists +Dec 10 10:17:44.283: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Dec 10 10:17:44.287: INFO: Pod pod-with-prestop-http-hook still exists +Dec 10 10:17:46.283: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Dec 10 10:17:46.286: INFO: Pod pod-with-prestop-http-hook still exists +Dec 10 10:17:48.283: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Dec 10 10:17:48.287: INFO: Pod pod-with-prestop-http-hook still exists +Dec 10 10:17:50.283: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Dec 10 10:17:50.286: INFO: Pod pod-with-prestop-http-hook still exists +Dec 10 10:17:52.283: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Dec 10 10:17:52.286: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:17:52.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-7203" for this suite. 
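+The "still exists" polling loop above can be replaced by a single built-in wait when reproducing this by hand (namespace and pod name taken from the run, long gone by now):
+```
+# Block until the pod object is removed instead of polling get-by-get.
+kubectl -n container-lifecycle-hook-7203 wait --for=delete \
+  pod/pod-with-prestop-http-hook --timeout=60s
+```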
+Dec 10 10:18:14.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:18:14.438: INFO: namespace container-lifecycle-hook-7203 deletion completed in 22.141530937s + +• [SLOW TEST:40.338 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + when create a pod with lifecycle hook + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:18:14.438: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-9261 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: creating the pod +Dec 10 10:18:14.584: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +Dec 10 10:18:18.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-9261" for this suite. 
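+What "invoke init containers on a RestartNever pod" asserts, restated as a hand-runnable sketch: init containers execute one at a time, in order, and must all succeed before any app container starts. All names below are invented:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: init-order-demo
+spec:
+  restartPolicy: Never
+  initContainers:
+  - name: init1
+    image: busybox:1.29
+    command: ["/bin/true"]
+  - name: init2
+    image: busybox:1.29
+    command: ["/bin/true"]
+  containers:
+  - name: run1
+    image: k8s.gcr.io/pause:3.1
+EOF
+# Both init containers should report exit code 0, in order:
+kubectl get pod init-order-demo \
+  -o jsonpath='{.status.initContainerStatuses[*].state.terminated.exitCode}'
+```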
+Dec 10 10:18:24.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 10 10:18:24.455: INFO: namespace init-container-9261 deletion completed in 6.087742157s + +• [SLOW TEST:10.017 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +------------------------------ +SS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Guestbook application + should create and stop a working application [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +STEP: Creating a kubernetes client +Dec 10 10:18:24.455: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5320 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 +[It] should create and stop a working application [Conformance] + /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 +STEP: creating all guestbook components +Dec 10 10:18:24.602: INFO: apiVersion: v1 +kind: Service +metadata: + name: redis-slave + labels: + app: redis + role: slave + tier: backend +spec: + ports: + - port: 6379 + selector: + app: redis + role: slave + tier: backend + +Dec 10 10:18:24.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-5320' +Dec 10 10:18:24.794: INFO: stderr: "" +Dec 10 10:18:24.794: INFO: stdout: "service/redis-slave created\n" +Dec 10 10:18:24.794: INFO: apiVersion: v1 +kind: Service +metadata: + name: redis-master + labels: + app: redis + role: master + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: redis + role: master + tier: backend + +Dec 10 10:18:24.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-5320' +Dec 10 10:18:24.947: INFO: stderr: "" +Dec 10 10:18:24.948: INFO: stdout: "service/redis-master created\n" +Dec 10 10:18:24.948: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Dec 10 10:18:24.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-5320' +Dec 10 10:18:25.101: INFO: stderr: "" +Dec 10 10:18:25.101: INFO: stdout: "service/frontend created\n" +Dec 10 10:18:25.101: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: php-redis + image: gcr.io/google-samples/gb-frontend:v6 + resources: + requests: + cpu: 100m + memory: 100Mi + env: + - name: GET_HOSTS_FROM + value: dns + # If your cluster config does not include a dns service, then to + # instead access environment variables to find service host + # info, comment out the 'value: dns' line above, and uncomment the + # line below: + # value: env + ports: + - containerPort: 80 + +Dec 10 10:18:25.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-5320' +Dec 10 10:18:25.256: INFO: stderr: "" +Dec 10 10:18:25.256: INFO: stdout: "deployment.apps/frontend created\n" +Dec 10 10:18:25.256: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis-master +spec: + replicas: 1 + selector: + matchLabels: + app: redis + role: master + tier: backend + template: + metadata: + labels: + app: redis + role: master + tier: backend + spec: + containers: + - name: master + image: gcr.io/kubernetes-e2e-test-images/redis:1.0 + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Dec 10 10:18:25.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-5320' +Dec 10 10:18:25.405: INFO: stderr: "" +Dec 10 10:18:25.405: INFO: stdout: "deployment.apps/redis-master created\n" +Dec 10 10:18:25.405: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis-slave +spec: + replicas: 2 + selector: + matchLabels: + app: redis + role: slave + tier: backend + template: + metadata: + labels: + app: redis + role: slave + tier: backend + spec: + containers: + - name: slave + image: gcr.io/google-samples/gb-redisslave:v3 + resources: + requests: + cpu: 100m + memory: 100Mi + env: + - name: GET_HOSTS_FROM + value: dns + # If your cluster config does not include a dns service, then to + # instead access an environment variable to find the master + # service's host, comment out the 'value: dns' line above, and + # uncomment the line below: + # value: env + ports: + - containerPort: 6379 + +Dec 10 10:18:25.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-5320' +Dec 10 10:18:25.557: INFO: stderr: "" +Dec 10 10:18:25.557: INFO: stdout: "deployment.apps/redis-slave created\n" +STEP: validating guestbook app +Dec 10 10:18:25.557: INFO: Waiting for all frontend pods to be Running. +Dec 10 10:18:30.607: INFO: Waiting for frontend to serve content. +Dec 10 10:18:35.637: INFO: Failed to get response from guestbook. err: , response:
+Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection timed out [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 +Stack trace: +#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection time...', 110) +#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) +#2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) +#3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() +#4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() +#5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stre in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
+
+Dec 10 10:18:40.660: INFO: Trying to add a new entry to the guestbook.
+Dec 10 10:18:40.681: INFO: Verifying that added entry can be retrieved.
+Dec 10 10:18:40.690: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
+Dec 10 10:18:45.702: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
+Dec 10 10:18:50.720: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
+Dec 10 10:18:55.741: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
+Dec 10 10:19:00.755: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
+Dec 10 10:19:05.766: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
+Dec 10 10:19:10.781: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
+Dec 10 10:19:15.798: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
+Dec 10 10:19:20.814: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
+Dec 10 10:19:25.837: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
+STEP: using delete to clean up resources
+Dec 10 10:19:30.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete --grace-period=0 --force -f - --namespace=kubectl-5320'
+Dec 10 10:19:30.949: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Dec 10 10:19:30.949: INFO: stdout: "service \"redis-slave\" force deleted\n"
+STEP: using delete to clean up resources
+Dec 10 10:19:30.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete --grace-period=0 --force -f - --namespace=kubectl-5320'
+Dec 10 10:19:31.033: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Dec 10 10:19:31.033: INFO: stdout: "service \"redis-master\" force deleted\n"
+STEP: using delete to clean up resources
+Dec 10 10:19:31.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete --grace-period=0 --force -f - --namespace=kubectl-5320'
+Dec 10 10:19:31.113: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Dec 10 10:19:31.113: INFO: stdout: "service \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Dec 10 10:19:31.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete --grace-period=0 --force -f - --namespace=kubectl-5320'
+Dec 10 10:19:31.197: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Dec 10 10:19:31.197: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Dec 10 10:19:31.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete --grace-period=0 --force -f - --namespace=kubectl-5320'
+Dec 10 10:19:31.274: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Dec 10 10:19:31.274: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
+STEP: using delete to clean up resources
+Dec 10 10:19:31.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete --grace-period=0 --force -f - --namespace=kubectl-5320'
+Dec 10 10:19:31.357: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Dec 10 10:19:31.357: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:19:31.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-5320" for this suite.
+Dec 10 10:20:09.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:20:09.429: INFO: namespace kubectl-5320 deletion completed in 38.068706005s
+
+• [SLOW TEST:104.974 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Guestbook application
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should create and stop a working application [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+S
+------------------------------
+[k8s.io] Probing container
+  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:20:09.430: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-probe
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8993
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
+[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating pod test-webserver-7a426166-ff17-4f86-b1ce-b22cffcbfff0 in namespace container-probe-8993
+Dec 10 10:20:11.589: INFO: Started pod test-webserver-7a426166-ff17-4f86-b1ce-b22cffcbfff0 in namespace container-probe-8993
+STEP: checking the pod's current state and verifying that restartCount is present
+Dec 10 10:20:11.590: INFO: Initial restart count of pod test-webserver-7a426166-ff17-4f86-b1ce-b22cffcbfff0 is 0
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:24:12.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-8993" for this suite.
+Dec 10 10:24:18.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:24:18.148: INFO: namespace container-probe-8993 deletion completed in 6.080931967s
+
+• [SLOW TEST:248.719 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Downward API
+  should provide pod UID as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:24:18.148: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3002
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide pod UID as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward api env vars
+Dec 10 10:24:18.297: INFO: Waiting up to 5m0s for pod "downward-api-0c98479e-d808-4ab0-ac19-3ff70ae2d939" in namespace "downward-api-3002" to be "success or failure"
+Dec 10 10:24:18.299: INFO: Pod "downward-api-0c98479e-d808-4ab0-ac19-3ff70ae2d939": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270135ms
+Dec 10 10:24:20.303: INFO: Pod "downward-api-0c98479e-d808-4ab0-ac19-3ff70ae2d939": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006446651s
+STEP: Saw pod success
+Dec 10 10:24:20.303: INFO: Pod "downward-api-0c98479e-d808-4ab0-ac19-3ff70ae2d939" satisfied condition "success or failure"
+Dec 10 10:24:20.306: INFO: Trying to get logs from node dce82 pod downward-api-0c98479e-d808-4ab0-ac19-3ff70ae2d939 container dapi-container: 
+STEP: delete the pod
+Dec 10 10:24:20.319: INFO: Waiting for pod downward-api-0c98479e-d808-4ab0-ac19-3ff70ae2d939 to disappear
+Dec 10 10:24:20.322: INFO: Pod downward-api-0c98479e-d808-4ab0-ac19-3ff70ae2d939 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:24:20.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-3002" for this suite.
+Dec 10 10:24:26.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:24:26.403: INFO: namespace downward-api-3002 deletion completed in 6.078816727s
+
+• [SLOW TEST:8.255 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
+  should provide pod UID as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-network] DNS
+  should provide DNS for the cluster [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-network] DNS
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:24:26.403: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename dns
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-4512
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for the cluster [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4512.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4512.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Dec 10 10:24:30.596: INFO: DNS probes using dns-4512/dns-test-0b0ab69d-a490-4baa-b557-7e53951de121 succeeded
+
+STEP: deleting the pod
+[AfterEach] [sig-network] DNS
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:24:30.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-4512" for this suite.
+Dec 10 10:24:36.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:24:36.688: INFO: namespace dns-4512 deletion completed in 6.083147437s
+
+• [SLOW TEST:10.285 seconds]
+[sig-network] DNS
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide DNS for the cluster [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:24:36.688: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename statefulset
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-1526
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
+STEP: Creating service test in namespace statefulset-1526
+[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Initializing watcher for selector baz=blah,foo=bar
+STEP: Creating stateful set ss in namespace statefulset-1526
+STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1526
+Dec 10 10:24:36.853: INFO: Found 0 stateful pods, waiting for 1
+Dec 10 10:24:46.856: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
+Dec 10 10:24:46.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-1526 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Dec 10 10:24:47.127: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Dec 10 10:24:47.127: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Dec 10 10:24:47.127: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Dec 10 10:24:47.131: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
+Dec 10 10:24:57.136: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Dec 10 10:24:57.136: INFO: Waiting for statefulset status.replicas updated to 0
+Dec 10 10:24:57.149: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999573s
+Dec 10 10:24:58.153: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996190531s
+Dec 10 10:24:59.157: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991700877s
+Dec 10 10:25:00.162: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987644273s
+Dec 10 10:25:01.165: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.983061189s
+Dec 10 10:25:02.168: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.979752931s
+Dec 10 10:25:03.173: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.976593235s
+Dec 10 10:25:04.177: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.971913061s
+Dec 10 10:25:05.182: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.96749634s
+Dec 10 10:25:06.186: INFO: Verifying statefulset ss doesn't scale past 1 for another 963.085803ms
+STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1526
+Dec 10 10:25:07.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-1526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Dec 10 10:25:07.398: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
+Dec 10 10:25:07.398: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Dec 10 10:25:07.398: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Dec 10 10:25:07.400: INFO: Found 1 stateful pods, waiting for 3
+Dec 10 10:25:17.405: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+Dec 10 10:25:17.406: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
+Dec 10 10:25:17.406: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Verifying that stateful set ss was scaled up in order
+STEP: Scale down will halt with unhealthy stateful pod
+Dec 10 10:25:17.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-1526 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Dec 10 10:25:17.630: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Dec 10 10:25:17.630: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Dec 10 10:25:17.630: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Dec 10 10:25:17.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-1526 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Dec 10 10:25:17.857: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Dec 10 10:25:17.857: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Dec 10 10:25:17.857: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Dec 10 10:25:17.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-1526 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Dec 10 10:25:18.077: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Dec 10 10:25:18.077: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Dec 10 10:25:18.077: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Dec 10 10:25:18.077: INFO: Waiting for statefulset status.replicas updated to 0
+Dec 10 10:25:18.081: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
+Dec 10 10:25:28.087: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Dec 10 10:25:28.087: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
+Dec 10 10:25:28.087: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
+Dec 10 10:25:28.096: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999372s
+Dec 10 10:25:29.099: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996003062s
+Dec 10 10:25:30.103: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992803213s
+Dec 10 10:25:31.107: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.989120402s
+Dec 10 10:25:32.115: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.984643838s
+Dec 10 10:25:33.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.977513817s
+Dec 10 10:25:34.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.973607185s
+Dec 10 10:25:35.129: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.968399796s
+Dec 10 10:25:36.134: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.963112532s
+Dec 10 10:25:37.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 957.749087ms
+STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1526
+Dec 10 10:25:38.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-1526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Dec 10 10:25:38.354: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
+Dec 10 10:25:38.354: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Dec 10 10:25:38.354: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Dec 10 10:25:38.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-1526 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Dec 10 10:25:38.589: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
+Dec 10 10:25:38.589: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Dec 10 10:25:38.589: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Dec 10 10:25:38.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-1526 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Dec 10 10:25:38.812: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
+Dec 10 10:25:38.812: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Dec 10 10:25:38.812: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Dec 10 10:25:38.812: INFO: Scaling statefulset ss to 0
+STEP: Verifying that stateful set ss was scaled down in reverse order
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
+Dec 10 10:25:58.838: INFO: Deleting all statefulset in ns statefulset-1526
+Dec 10 10:25:58.841: INFO: Scaling statefulset ss to 0
+Dec 10 10:25:58.849: INFO: Waiting for statefulset status.replicas updated to 0
+Dec 10 10:25:58.851: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:25:58.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-1526" for this suite.
+Dec 10 10:26:04.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:26:04.957: INFO: namespace statefulset-1526 deletion completed in 6.089752907s
+
+• [SLOW TEST:88.269 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] InitContainer [NodeConformance]
+  should invoke init containers on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:26:04.957: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename init-container
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-6573
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
+[It] should invoke init containers on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating the pod
+Dec 10 10:26:05.099: INFO: PodSpec: initContainers in spec.initContainers
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:26:09.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "init-container-6573" for this suite.
+Dec 10 10:26:31.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:26:31.163: INFO: namespace init-container-6573 deletion completed in 22.097487802s
+
+• [SLOW TEST:26.206 seconds]
+[k8s.io] InitContainer [NodeConformance]
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should invoke init containers on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job
+  should create a job from an image, then delete the job [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:26:31.163: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7682
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[It] should create a job from an image, then delete the job [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: executing a command with run --rm and attach with stdin
+Dec 10 10:26:31.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 --namespace=kubectl-7682 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
+Dec 10 10:26:33.586: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
+Dec 10 10:26:33.586: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
+STEP: verifying the job e2e-test-rm-busybox-job was deleted
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:26:35.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-7682" for this suite.
+Dec 10 10:26:43.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:26:43.670: INFO: namespace kubectl-7682 deletion completed in 8.076129351s
+
+• [SLOW TEST:12.507 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl run --rm job
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should create a job from an image, then delete the job [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+S
+------------------------------
+[sig-storage] ConfigMap
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:26:43.670: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename configmap
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8692
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name configmap-test-volume-map-d7155456-ed49-4ce3-be6f-6a84a9a43306
+STEP: Creating a pod to test consume configMaps
+Dec 10 10:26:43.827: INFO: Waiting up to 5m0s for pod "pod-configmaps-a410ba7b-0b02-475e-b0d7-8e41576b9c92" in namespace "configmap-8692" to be "success or failure"
+Dec 10 10:26:43.831: INFO: Pod "pod-configmaps-a410ba7b-0b02-475e-b0d7-8e41576b9c92": Phase="Pending", Reason="", readiness=false. Elapsed: 3.768228ms
+Dec 10 10:26:45.834: INFO: Pod "pod-configmaps-a410ba7b-0b02-475e-b0d7-8e41576b9c92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006632122s
+STEP: Saw pod success
+Dec 10 10:26:45.834: INFO: Pod "pod-configmaps-a410ba7b-0b02-475e-b0d7-8e41576b9c92" satisfied condition "success or failure"
+Dec 10 10:26:45.836: INFO: Trying to get logs from node dce82 pod pod-configmaps-a410ba7b-0b02-475e-b0d7-8e41576b9c92 container configmap-volume-test: 
+STEP: delete the pod
+Dec 10 10:26:45.858: INFO: Waiting for pod pod-configmaps-a410ba7b-0b02-475e-b0d7-8e41576b9c92 to disappear
+Dec 10 10:26:45.859: INFO: Pod pod-configmaps-a410ba7b-0b02-475e-b0d7-8e41576b9c92 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:26:45.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-8692" for this suite.
+
+Dec 10 10:26:51.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:26:51.939: INFO: namespace configmap-8692 deletion completed in 6.075744133s
+
+• [SLOW TEST:8.269 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1
+  should proxy logs on node using proxy subresource [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] version v1
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:26:51.939: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename proxy
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-8369
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy logs on node using proxy subresource [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 10:26:52.094: INFO: (0) /api/v1/nodes/dce81/proxy/logs/:
+anaconda/
+audit/
+boot.log
+[the same anaconda/, audit/, boot.log listing was returned for proxy requests (1) through (19); the tail of this test's log and the header of the following [sig-storage] Projected secret test were lost in extraction]
+>>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7903
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating projection with secret that has name projected-secret-test-map-ff348334-1ef5-4cc4-8723-d925794825cf
+STEP: Creating a pod to test consume secrets
+Dec 10 10:26:58.417: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5cd8d36f-45a6-4ea4-b66e-7b7fba253e5a" in namespace "projected-7903" to be "success or failure"
+Dec 10 10:26:58.420: INFO: Pod "pod-projected-secrets-5cd8d36f-45a6-4ea4-b66e-7b7fba253e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297419ms
+Dec 10 10:27:00.424: INFO: Pod "pod-projected-secrets-5cd8d36f-45a6-4ea4-b66e-7b7fba253e5a": Phase="Running", Reason="", readiness=true. Elapsed: 2.006575986s
+Dec 10 10:27:02.429: INFO: Pod "pod-projected-secrets-5cd8d36f-45a6-4ea4-b66e-7b7fba253e5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011713819s
+STEP: Saw pod success
+Dec 10 10:27:02.429: INFO: Pod "pod-projected-secrets-5cd8d36f-45a6-4ea4-b66e-7b7fba253e5a" satisfied condition "success or failure"
+Dec 10 10:27:02.433: INFO: Trying to get logs from node dce82 pod pod-projected-secrets-5cd8d36f-45a6-4ea4-b66e-7b7fba253e5a container projected-secret-volume-test: 
+STEP: delete the pod
+Dec 10 10:27:02.451: INFO: Waiting for pod pod-projected-secrets-5cd8d36f-45a6-4ea4-b66e-7b7fba253e5a to disappear
+Dec 10 10:27:02.453: INFO: Pod pod-projected-secrets-5cd8d36f-45a6-4ea4-b66e-7b7fba253e5a no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:27:02.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-7903" for this suite.
+Dec 10 10:27:08.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:27:08.593: INFO: namespace projected-7903 deletion completed in 6.136459313s
+
+• [SLOW TEST:10.347 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-apps] ReplicationController 
+  should surface a failure condition on a common issue like exceeded quota [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:27:08.594: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename replication-controller
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-3890
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 10:27:08.745: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
+STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
+STEP: Checking rc "condition-test" has the desired failure condition set
+STEP: Scaling down rc "condition-test" to satisfy pod quota
+Dec 10 10:27:10.803: INFO: Updating replication controller "condition-test"
+STEP: Checking rc "condition-test" has no failure condition set
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:27:10.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replication-controller-3890" for this suite.
+Dec 10 10:27:16.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:27:16.899: INFO: namespace replication-controller-3890 deletion completed in 6.090269136s
+
+• [SLOW TEST:8.305 seconds]
+[sig-apps] ReplicationController
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should surface a failure condition on a common issue like exceeded quota [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:27:16.899: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2482
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating secret with name projected-secret-test-1efbd4d9-f6d3-4027-9931-cfccea2f109d
+STEP: Creating a pod to test consume secrets
+Dec 10 10:27:17.051: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bd40371c-c2cb-41dd-9afa-b0c28a074a4b" in namespace "projected-2482" to be "success or failure"
+Dec 10 10:27:17.055: INFO: Pod "pod-projected-secrets-bd40371c-c2cb-41dd-9afa-b0c28a074a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.566532ms
+Dec 10 10:27:19.059: INFO: Pod "pod-projected-secrets-bd40371c-c2cb-41dd-9afa-b0c28a074a4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007919563s
+STEP: Saw pod success
+Dec 10 10:27:19.059: INFO: Pod "pod-projected-secrets-bd40371c-c2cb-41dd-9afa-b0c28a074a4b" satisfied condition "success or failure"
+Dec 10 10:27:19.061: INFO: Trying to get logs from node dce82 pod pod-projected-secrets-bd40371c-c2cb-41dd-9afa-b0c28a074a4b container secret-volume-test: 
+STEP: delete the pod
+Dec 10 10:27:19.078: INFO: Waiting for pod pod-projected-secrets-bd40371c-c2cb-41dd-9afa-b0c28a074a4b to disappear
+Dec 10 10:27:19.084: INFO: Pod pod-projected-secrets-bd40371c-c2cb-41dd-9afa-b0c28a074a4b no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:27:19.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-2482" for this suite.
+Dec 10 10:27:25.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:27:25.176: INFO: namespace projected-2482 deletion completed in 6.088919692s
+
+• [SLOW TEST:8.278 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:27:25.177: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename secrets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-6868
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating secret with name s-test-opt-del-178c7ced-16c6-472e-a067-3e94ef1f09ed
+STEP: Creating secret with name s-test-opt-upd-af43fbdd-b43c-46cc-90f9-7aa4d05d7381
+STEP: Creating the pod
+STEP: Deleting secret s-test-opt-del-178c7ced-16c6-472e-a067-3e94ef1f09ed
+STEP: Updating secret s-test-opt-upd-af43fbdd-b43c-46cc-90f9-7aa4d05d7381
+STEP: Creating secret with name s-test-opt-create-5b694104-8d14-4026-920c-b520d5058445
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:28:35.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-6868" for this suite.
+Dec 10 10:28:57.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:28:57.867: INFO: namespace secrets-6868 deletion completed in 22.087313827s
+
+• [SLOW TEST:92.690 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir wrapper volumes 
+  should not cause race condition when used for configmaps [Serial] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:28:57.867: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir-wrapper
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-8812
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not cause race condition when used for configmaps [Serial] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating 50 configmaps
+STEP: Creating RC which spawns configmap-volume pods
+Dec 10 10:28:58.223: INFO: Pod name wrapped-volume-race-78efdbb0-6b33-446d-88a6-932d03d703eb: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-78efdbb0-6b33-446d-88a6-932d03d703eb in namespace emptydir-wrapper-8812, will wait for the garbage collector to delete the pods
+Dec 10 10:29:16.348: INFO: Deleting ReplicationController wrapped-volume-race-78efdbb0-6b33-446d-88a6-932d03d703eb took: 8.740363ms
+Dec 10 10:29:16.749: INFO: Terminating ReplicationController wrapped-volume-race-78efdbb0-6b33-446d-88a6-932d03d703eb pods took: 400.195786ms
+STEP: Creating RC which spawns configmap-volume pods
+Dec 10 10:30:01.262: INFO: Pod name wrapped-volume-race-0eca62cb-e3c6-40d3-abcb-c2e6e83f01ca: Found 0 pods out of 5
+Dec 10 10:30:06.268: INFO: Pod name wrapped-volume-race-0eca62cb-e3c6-40d3-abcb-c2e6e83f01ca: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-0eca62cb-e3c6-40d3-abcb-c2e6e83f01ca in namespace emptydir-wrapper-8812, will wait for the garbage collector to delete the pods
+Dec 10 10:30:18.347: INFO: Deleting ReplicationController wrapped-volume-race-0eca62cb-e3c6-40d3-abcb-c2e6e83f01ca took: 9.153452ms
+Dec 10 10:30:18.748: INFO: Terminating ReplicationController wrapped-volume-race-0eca62cb-e3c6-40d3-abcb-c2e6e83f01ca pods took: 400.295436ms
+STEP: Creating RC which spawns configmap-volume pods
+Dec 10 10:31:01.914: INFO: Pod name wrapped-volume-race-11141673-5ef6-47ed-af60-890b81bf03d7: Found 0 pods out of 5
+Dec 10 10:31:06.922: INFO: Pod name wrapped-volume-race-11141673-5ef6-47ed-af60-890b81bf03d7: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-11141673-5ef6-47ed-af60-890b81bf03d7 in namespace emptydir-wrapper-8812, will wait for the garbage collector to delete the pods
+Dec 10 10:31:19.006: INFO: Deleting ReplicationController wrapped-volume-race-11141673-5ef6-47ed-af60-890b81bf03d7 took: 6.948623ms
+Dec 10 10:31:19.406: INFO: Terminating ReplicationController wrapped-volume-race-11141673-5ef6-47ed-af60-890b81bf03d7 pods took: 400.30183ms
+STEP: Cleaning up the configMaps
+[AfterEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:32:01.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-wrapper-8812" for this suite.
+Dec 10 10:32:07.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:32:07.117: INFO: namespace emptydir-wrapper-8812 deletion completed in 6.085618798s
+
+• [SLOW TEST:189.250 seconds]
+[sig-storage] EmptyDir wrapper volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  should not cause race condition when used for configmaps [Serial] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:32:07.117: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-951
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 10:32:07.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d1b3800-17a2-4b82-9f41-a7c08e0d8558" in namespace "projected-951" to be "success or failure"
+Dec 10 10:32:07.278: INFO: Pod "downwardapi-volume-9d1b3800-17a2-4b82-9f41-a7c08e0d8558": Phase="Pending", Reason="", readiness=false. Elapsed: 3.974004ms
+Dec 10 10:32:09.282: INFO: Pod "downwardapi-volume-9d1b3800-17a2-4b82-9f41-a7c08e0d8558": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007940241s
+STEP: Saw pod success
+Dec 10 10:32:09.282: INFO: Pod "downwardapi-volume-9d1b3800-17a2-4b82-9f41-a7c08e0d8558" satisfied condition "success or failure"
+Dec 10 10:32:09.284: INFO: Trying to get logs from node dce82 pod downwardapi-volume-9d1b3800-17a2-4b82-9f41-a7c08e0d8558 container client-container: 
+STEP: delete the pod
+Dec 10 10:32:09.299: INFO: Waiting for pod downwardapi-volume-9d1b3800-17a2-4b82-9f41-a7c08e0d8558 to disappear
+Dec 10 10:32:09.301: INFO: Pod downwardapi-volume-9d1b3800-17a2-4b82-9f41-a7c08e0d8558 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:32:09.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-951" for this suite.
+Dec 10 10:32:15.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:32:15.399: INFO: namespace projected-951 deletion completed in 6.094065579s
+
+• [SLOW TEST:8.282 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
+  should create an rc from an image  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:32:15.399: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-290
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[BeforeEach] [k8s.io] Kubectl run rc
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1457
+[It] should create an rc from an image  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Dec 10 10:32:15.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-290'
+Dec 10 10:32:15.645: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Dec 10 10:32:15.645: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
+STEP: verifying the rc e2e-test-nginx-rc was created
+STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
+STEP: confirm that you can get logs from an rc
+Dec 10 10:32:15.649: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-qsdsj]
+Dec 10 10:32:15.649: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-qsdsj" in namespace "kubectl-290" to be "running and ready"
+Dec 10 10:32:15.652: INFO: Pod "e2e-test-nginx-rc-qsdsj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.460025ms
+Dec 10 10:32:17.657: INFO: Pod "e2e-test-nginx-rc-qsdsj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008196676s
+Dec 10 10:32:19.662: INFO: Pod "e2e-test-nginx-rc-qsdsj": Phase="Running", Reason="", readiness=true. Elapsed: 4.012691961s
+Dec 10 10:32:19.662: INFO: Pod "e2e-test-nginx-rc-qsdsj" satisfied condition "running and ready"
+Dec 10 10:32:19.662: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-qsdsj]
+Dec 10 10:32:19.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 logs rc/e2e-test-nginx-rc --namespace=kubectl-290'
+Dec 10 10:32:19.770: INFO: stderr: ""
+Dec 10 10:32:19.770: INFO: stdout: ""
+[AfterEach] [k8s.io] Kubectl run rc
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462
+Dec 10 10:32:19.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete rc e2e-test-nginx-rc --namespace=kubectl-290'
+Dec 10 10:32:19.858: INFO: stderr: ""
+Dec 10 10:32:19.858: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:32:19.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-290" for this suite.
+Dec 10 10:32:41.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:32:41.956: INFO: namespace kubectl-290 deletion completed in 22.09380406s
+
+• [SLOW TEST:26.556 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl run rc
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should create an rc from an image  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSS
+------------------------------
+[k8s.io] Probing container 
+  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:32:41.956: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-probe
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-9260
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
+[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating pod busybox-d04289ab-a6e1-4b99-8164-902689bf0d45 in namespace container-probe-9260
+Dec 10 10:32:46.157: INFO: Started pod busybox-d04289ab-a6e1-4b99-8164-902689bf0d45 in namespace container-probe-9260
+STEP: checking the pod's current state and verifying that restartCount is present
+Dec 10 10:32:46.159: INFO: Initial restart count of pod busybox-d04289ab-a6e1-4b99-8164-902689bf0d45 is 0
+Dec 10 10:33:36.288: INFO: Restart count of pod container-probe-9260/busybox-d04289ab-a6e1-4b99-8164-902689bf0d45 is now 1 (50.129366671s elapsed)
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:33:36.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-9260" for this suite.
+Dec 10 10:33:42.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:33:42.379: INFO: namespace container-probe-9260 deletion completed in 6.080337509s
+
+• [SLOW TEST:60.423 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Namespaces [Serial] 
+  should ensure that all pods are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:33:42.379: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename namespaces
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-3515
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a test namespace
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-3821
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Creating a pod in the namespace
+STEP: Waiting for the pod to have running status
+STEP: Deleting the namespace
+STEP: Waiting for the namespace to be removed.
+STEP: Recreating the namespace
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-5781
+STEP: Verifying there are no pods in the namespace
+[AfterEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:34:06.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "namespaces-3515" for this suite.
+Dec 10 10:34:12.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:34:12.904: INFO: namespace namespaces-3515 deletion completed in 6.090117619s
+STEP: Destroying namespace "nsdeletetest-3821" for this suite.
+Dec 10 10:34:12.905: INFO: Namespace nsdeletetest-3821 was already deleted
+STEP: Destroying namespace "nsdeletetest-5781" for this suite.
+Dec 10 10:34:18.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:34:18.973: INFO: namespace nsdeletetest-5781 deletion completed in 6.068214953s
+
+• [SLOW TEST:36.594 seconds]
+[sig-api-machinery] Namespaces [Serial]
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should ensure that all pods are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:34:18.974: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3740
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 10:34:19.114: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb9d0ec9-0471-437b-ae11-0abeb272f771" in namespace "downward-api-3740" to be "success or failure"
+Dec 10 10:34:19.116: INFO: Pod "downwardapi-volume-eb9d0ec9-0471-437b-ae11-0abeb272f771": Phase="Pending", Reason="", readiness=false. Elapsed: 2.554747ms
+Dec 10 10:34:21.120: INFO: Pod "downwardapi-volume-eb9d0ec9-0471-437b-ae11-0abeb272f771": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006803701s
+STEP: Saw pod success
+Dec 10 10:34:21.120: INFO: Pod "downwardapi-volume-eb9d0ec9-0471-437b-ae11-0abeb272f771" satisfied condition "success or failure"
+Dec 10 10:34:21.124: INFO: Trying to get logs from node dce82 pod downwardapi-volume-eb9d0ec9-0471-437b-ae11-0abeb272f771 container client-container: 
+STEP: delete the pod
+Dec 10 10:34:21.142: INFO: Waiting for pod downwardapi-volume-eb9d0ec9-0471-437b-ae11-0abeb272f771 to disappear
+Dec 10 10:34:21.144: INFO: Pod downwardapi-volume-eb9d0ec9-0471-437b-ae11-0abeb272f771 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:34:21.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-3740" for this suite.
+Dec 10 10:34:27.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:34:27.218: INFO: namespace downward-api-3740 deletion completed in 6.071139074s
+
+• [SLOW TEST:8.244 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:34:27.219: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9422
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir 0777 on tmpfs
+Dec 10 10:34:27.365: INFO: Waiting up to 5m0s for pod "pod-def60ea6-e4d5-49e7-b7ca-8aeba826e36f" in namespace "emptydir-9422" to be "success or failure"
+Dec 10 10:34:27.368: INFO: Pod "pod-def60ea6-e4d5-49e7-b7ca-8aeba826e36f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.410406ms
+Dec 10 10:34:29.371: INFO: Pod "pod-def60ea6-e4d5-49e7-b7ca-8aeba826e36f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005304192s
+STEP: Saw pod success
+Dec 10 10:34:29.371: INFO: Pod "pod-def60ea6-e4d5-49e7-b7ca-8aeba826e36f" satisfied condition "success or failure"
+Dec 10 10:34:29.372: INFO: Trying to get logs from node dce82 pod pod-def60ea6-e4d5-49e7-b7ca-8aeba826e36f container test-container: 
+STEP: delete the pod
+Dec 10 10:34:29.385: INFO: Waiting for pod pod-def60ea6-e4d5-49e7-b7ca-8aeba826e36f to disappear
+Dec 10 10:34:29.387: INFO: Pod pod-def60ea6-e4d5-49e7-b7ca-8aeba826e36f no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:34:29.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-9422" for this suite.
+Dec 10 10:34:35.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:34:35.466: INFO: namespace emptydir-9422 deletion completed in 6.076241913s
+
+• [SLOW TEST:8.247 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute poststart http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:34:35.466: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-3322
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute poststart http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: create the pod with lifecycle hook
+STEP: check poststart hook
+STEP: delete the pod with lifecycle hook
+Dec 10 10:34:39.650: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Dec 10 10:34:39.652: INFO: Pod pod-with-poststart-http-hook still exists
+Dec 10 10:34:41.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Dec 10 10:34:41.656: INFO: Pod pod-with-poststart-http-hook still exists
+Dec 10 10:34:43.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Dec 10 10:34:43.655: INFO: Pod pod-with-poststart-http-hook still exists
+Dec 10 10:34:45.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Dec 10 10:34:45.656: INFO: Pod pod-with-poststart-http-hook still exists
+Dec 10 10:34:47.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Dec 10 10:34:47.655: INFO: Pod pod-with-poststart-http-hook still exists
+Dec 10 10:34:49.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Dec 10 10:34:49.656: INFO: Pod pod-with-poststart-http-hook still exists
+Dec 10 10:34:51.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Dec 10 10:34:51.655: INFO: Pod pod-with-poststart-http-hook still exists
+Dec 10 10:34:53.653: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Dec 10 10:34:53.655: INFO: Pod pod-with-poststart-http-hook no longer exists
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:34:53.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-lifecycle-hook-3322" for this suite.
+Dec 10 10:35:15.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:35:15.741: INFO: namespace container-lifecycle-hook-3322 deletion completed in 22.0825656s
+
+• [SLOW TEST:40.275 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
+    should execute poststart http hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:35:15.741: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3482
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir 0644 on node default medium
+Dec 10 10:35:15.882: INFO: Waiting up to 5m0s for pod "pod-86afdb2a-206a-47c3-8103-b965190d6c46" in namespace "emptydir-3482" to be "success or failure"
+Dec 10 10:35:15.885: INFO: Pod "pod-86afdb2a-206a-47c3-8103-b965190d6c46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.88662ms
+Dec 10 10:35:17.889: INFO: Pod "pod-86afdb2a-206a-47c3-8103-b965190d6c46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006666251s
+Dec 10 10:35:19.891: INFO: Pod "pod-86afdb2a-206a-47c3-8103-b965190d6c46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009473762s
+STEP: Saw pod success
+Dec 10 10:35:19.891: INFO: Pod "pod-86afdb2a-206a-47c3-8103-b965190d6c46" satisfied condition "success or failure"
+Dec 10 10:35:19.893: INFO: Trying to get logs from node dce82 pod pod-86afdb2a-206a-47c3-8103-b965190d6c46 container test-container: 
+STEP: delete the pod
+Dec 10 10:35:19.904: INFO: Waiting for pod pod-86afdb2a-206a-47c3-8103-b965190d6c46 to disappear
+Dec 10 10:35:19.906: INFO: Pod pod-86afdb2a-206a-47c3-8103-b965190d6c46 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:35:19.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-3482" for this suite.
+Dec 10 10:35:25.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:35:25.988: INFO: namespace emptydir-3482 deletion completed in 6.079601999s
+
+• [SLOW TEST:10.247 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
+  should be possible to delete [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:35:25.988: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-7176
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[BeforeEach] when scheduling a busybox command that always fails in a pod
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
+[It] should be possible to delete [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:35:26.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-7176" for this suite.
+Dec 10 10:35:32.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:35:32.217: INFO: namespace kubelet-test-7176 deletion completed in 6.069090602s
+
+• [SLOW TEST:6.229 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  when scheduling a busybox command that always fails in a pod
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
+    should be possible to delete [NodeConformance] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:35:32.217: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1083
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 10:35:32.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc94b0de-323f-4ed3-a259-95a5ec765647" in namespace "downward-api-1083" to be "success or failure"
+Dec 10 10:35:32.377: INFO: Pod "downwardapi-volume-fc94b0de-323f-4ed3-a259-95a5ec765647": Phase="Pending", Reason="", readiness=false. Elapsed: 3.101796ms
+Dec 10 10:35:34.380: INFO: Pod "downwardapi-volume-fc94b0de-323f-4ed3-a259-95a5ec765647": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006492922s
+STEP: Saw pod success
+Dec 10 10:35:34.380: INFO: Pod "downwardapi-volume-fc94b0de-323f-4ed3-a259-95a5ec765647" satisfied condition "success or failure"
+Dec 10 10:35:34.383: INFO: Trying to get logs from node dce82 pod downwardapi-volume-fc94b0de-323f-4ed3-a259-95a5ec765647 container client-container: 
+STEP: delete the pod
+Dec 10 10:35:34.396: INFO: Waiting for pod downwardapi-volume-fc94b0de-323f-4ed3-a259-95a5ec765647 to disappear
+Dec 10 10:35:34.398: INFO: Pod downwardapi-volume-fc94b0de-323f-4ed3-a259-95a5ec765647 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:35:34.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-1083" for this suite.
+Dec 10 10:35:40.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:35:40.528: INFO: namespace downward-api-1083 deletion completed in 6.126778855s
+
+• [SLOW TEST:8.311 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:35:40.528: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5301
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 10:35:40.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8a71acd-ffb9-4dcd-b732-1b5dd7ee4edc" in namespace "projected-5301" to be "success or failure"
+Dec 10 10:35:40.678: INFO: Pod "downwardapi-volume-d8a71acd-ffb9-4dcd-b732-1b5dd7ee4edc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401914ms
+Dec 10 10:35:42.682: INFO: Pod "downwardapi-volume-d8a71acd-ffb9-4dcd-b732-1b5dd7ee4edc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005709027s
+STEP: Saw pod success
+Dec 10 10:35:42.682: INFO: Pod "downwardapi-volume-d8a71acd-ffb9-4dcd-b732-1b5dd7ee4edc" satisfied condition "success or failure"
+Dec 10 10:35:42.684: INFO: Trying to get logs from node dce82 pod downwardapi-volume-d8a71acd-ffb9-4dcd-b732-1b5dd7ee4edc container client-container: 
+STEP: delete the pod
+Dec 10 10:35:42.696: INFO: Waiting for pod downwardapi-volume-d8a71acd-ffb9-4dcd-b732-1b5dd7ee4edc to disappear
+Dec 10 10:35:42.698: INFO: Pod downwardapi-volume-d8a71acd-ffb9-4dcd-b732-1b5dd7ee4edc no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:35:42.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-5301" for this suite.
+Dec 10 10:35:48.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:35:48.781: INFO: namespace projected-5301 deletion completed in 6.080502782s
+
+• [SLOW TEST:8.253 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:35:48.781: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename watch
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-9041
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating a watch on configmaps with a certain label
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: changing the label value of the configmap
+STEP: Expecting to observe a delete notification for the watched object
+Dec 10 10:35:48.937: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9041,SelfLink:/api/v1/namespaces/watch-9041/configmaps/e2e-watch-test-label-changed,UID:6104578f-432a-4a83-8d85-06d452ba634f,ResourceVersion:368329,Generation:0,CreationTimestamp:2019-12-10 10:35:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
+Dec 10 10:35:48.937: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9041,SelfLink:/api/v1/namespaces/watch-9041/configmaps/e2e-watch-test-label-changed,UID:6104578f-432a-4a83-8d85-06d452ba634f,ResourceVersion:368330,Generation:0,CreationTimestamp:2019-12-10 10:35:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+Dec 10 10:35:48.938: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9041,SelfLink:/api/v1/namespaces/watch-9041/configmaps/e2e-watch-test-label-changed,UID:6104578f-432a-4a83-8d85-06d452ba634f,ResourceVersion:368331,Generation:0,CreationTimestamp:2019-12-10 10:35:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+STEP: modifying the configmap a second time
+STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
+STEP: changing the label value of the configmap back
+STEP: modifying the configmap a third time
+STEP: deleting the configmap
+STEP: Expecting to observe an add notification for the watched object when the label value was restored
+Dec 10 10:35:58.956: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9041,SelfLink:/api/v1/namespaces/watch-9041/configmaps/e2e-watch-test-label-changed,UID:6104578f-432a-4a83-8d85-06d452ba634f,ResourceVersion:368352,Generation:0,CreationTimestamp:2019-12-10 10:35:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Dec 10 10:35:58.956: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9041,SelfLink:/api/v1/namespaces/watch-9041/configmaps/e2e-watch-test-label-changed,UID:6104578f-432a-4a83-8d85-06d452ba634f,ResourceVersion:368353,Generation:0,CreationTimestamp:2019-12-10 10:35:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
+Dec 10 10:35:58.956: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9041,SelfLink:/api/v1/namespaces/watch-9041/configmaps/e2e-watch-test-label-changed,UID:6104578f-432a-4a83-8d85-06d452ba634f,ResourceVersion:368354,Generation:0,CreationTimestamp:2019-12-10 10:35:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:35:58.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-9041" for this suite.
+Dec 10 10:36:04.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:36:05.023: INFO: namespace watch-9041 deletion completed in 6.064099342s
+
+• [SLOW TEST:16.241 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:36:05.023: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename secrets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9221
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating secret with name secret-test-bf489e63-063f-4f8c-b857-c54a37f83c6b
+STEP: Creating a pod to test consume secrets
+Dec 10 10:36:05.235: INFO: Waiting up to 5m0s for pod "pod-secrets-a557b375-b963-4176-8586-ad2355a7ed7e" in namespace "secrets-9221" to be "success or failure"
+Dec 10 10:36:05.238: INFO: Pod "pod-secrets-a557b375-b963-4176-8586-ad2355a7ed7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.815074ms
+Dec 10 10:36:07.242: INFO: Pod "pod-secrets-a557b375-b963-4176-8586-ad2355a7ed7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007190452s
+STEP: Saw pod success
+Dec 10 10:36:07.242: INFO: Pod "pod-secrets-a557b375-b963-4176-8586-ad2355a7ed7e" satisfied condition "success or failure"
+Dec 10 10:36:07.245: INFO: Trying to get logs from node dce82 pod pod-secrets-a557b375-b963-4176-8586-ad2355a7ed7e container secret-volume-test: 
+STEP: delete the pod
+Dec 10 10:36:07.263: INFO: Waiting for pod pod-secrets-a557b375-b963-4176-8586-ad2355a7ed7e to disappear
+Dec 10 10:36:07.265: INFO: Pod pod-secrets-a557b375-b963-4176-8586-ad2355a7ed7e no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:36:07.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-9221" for this suite.
+Dec 10 10:36:13.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:36:13.342: INFO: namespace secrets-9221 deletion completed in 6.074480123s
+
+• [SLOW TEST:8.319 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+S
+------------------------------
+[sig-network] Services 
+  should provide secure master service  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:36:13.342: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename services
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6277
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
+[It] should provide secure master service  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:36:13.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-6277" for this suite.
+Dec 10 10:36:19.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:36:19.562: INFO: namespace services-6277 deletion completed in 6.074950422s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
+
+• [SLOW TEST:6.220 seconds]
+[sig-network] Services
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide secure master service  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+[sig-storage] HostPath 
+  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] HostPath
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:36:19.562: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename hostpath
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-5255
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] HostPath
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
+[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test hostPath mode
+Dec 10 10:36:19.721: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5255" to be "success or failure"
+Dec 10 10:36:19.724: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.038991ms
+Dec 10 10:36:21.728: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00705761s
+Dec 10 10:36:23.733: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011873557s
+STEP: Saw pod success
+Dec 10 10:36:23.733: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
+Dec 10 10:36:23.738: INFO: Trying to get logs from node dce82 pod pod-host-path-test container test-container-1: 
+STEP: delete the pod
+Dec 10 10:36:23.751: INFO: Waiting for pod pod-host-path-test to disappear
+Dec 10 10:36:23.753: INFO: Pod pod-host-path-test no longer exists
+[AfterEach] [sig-storage] HostPath
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:36:23.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "hostpath-5255" for this suite.
+Dec 10 10:36:29.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:36:29.829: INFO: namespace hostpath-5255 deletion completed in 6.073062765s
+
+• [SLOW TEST:10.267 seconds]
+[sig-storage] HostPath
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
+  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:36:29.829: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6225
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating projection with configMap that has name projected-configmap-test-upd-8e3b2f21-c61f-42d8-8fad-68786834d820
+STEP: Creating the pod
+STEP: Updating configmap projected-configmap-test-upd-8e3b2f21-c61f-42d8-8fad-68786834d820
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:38:00.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-6225" for this suite.
+Dec 10 10:38:22.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:38:22.458: INFO: namespace projected-6225 deletion completed in 22.070644984s
+
+• [SLOW TEST:112.628 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:38:22.458: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename configmap
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1109
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name cm-test-opt-del-c2fcea5c-96ed-49c2-89d9-ed7960a06d29
+STEP: Creating configMap with name cm-test-opt-upd-8f446f50-be67-47f8-acb0-f84e9347c52d
+STEP: Creating the pod
+STEP: Deleting configmap cm-test-opt-del-c2fcea5c-96ed-49c2-89d9-ed7960a06d29
+STEP: Updating configmap cm-test-opt-upd-8f446f50-be67-47f8-acb0-f84e9347c52d
+STEP: Creating configMap with name cm-test-opt-create-553cf3be-45f8-4659-82a2-400aef79bdef
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:39:51.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-1109" for this suite.
+Dec 10 10:40:13.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:40:13.133: INFO: namespace configmap-1109 deletion completed in 22.070394759s
+
+• [SLOW TEST:110.675 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should support remote command execution over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:40:13.134: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename pods
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5691
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
+[It] should support remote command execution over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 10:40:13.277: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:40:15.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-5691" for this suite.
+Dec 10 10:40:53.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:40:53.490: INFO: namespace pods-5691 deletion completed in 38.079813138s
+
+• [SLOW TEST:40.356 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should support remote command execution over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:40:53.490: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5360
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name projected-configmap-test-volume-map-010868b8-025a-4fd8-ac1c-f2182e1b3fb7
+STEP: Creating a pod to test consume configMaps
+Dec 10 10:40:53.649: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-159d40e0-7893-4d62-a81d-90cca09c3017" in namespace "projected-5360" to be "success or failure"
+Dec 10 10:40:53.651: INFO: Pod "pod-projected-configmaps-159d40e0-7893-4d62-a81d-90cca09c3017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452037ms
+Dec 10 10:40:55.655: INFO: Pod "pod-projected-configmaps-159d40e0-7893-4d62-a81d-90cca09c3017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005782091s
+STEP: Saw pod success
+Dec 10 10:40:55.655: INFO: Pod "pod-projected-configmaps-159d40e0-7893-4d62-a81d-90cca09c3017" satisfied condition "success or failure"
+Dec 10 10:40:55.657: INFO: Trying to get logs from node dce82 pod pod-projected-configmaps-159d40e0-7893-4d62-a81d-90cca09c3017 container projected-configmap-volume-test: 
+STEP: delete the pod
+Dec 10 10:40:55.671: INFO: Waiting for pod pod-projected-configmaps-159d40e0-7893-4d62-a81d-90cca09c3017 to disappear
+Dec 10 10:40:55.674: INFO: Pod pod-projected-configmaps-159d40e0-7893-4d62-a81d-90cca09c3017 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:40:55.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-5360" for this suite.
+Dec 10 10:41:01.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:41:01.762: INFO: namespace projected-5360 deletion completed in 6.085636177s
+
+• [SLOW TEST:8.272 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command in a pod 
+  should print the output to logs [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:41:01.763: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-2199
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[It] should print the output to logs [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:41:03.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-2199" for this suite.
+Dec 10 10:41:54.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:41:54.292: INFO: namespace kubelet-test-2199 deletion completed in 50.371567441s
+
+• [SLOW TEST:52.529 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  when scheduling a busybox command in a pod
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
+    should print the output to logs [NodeConformance] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
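+
+A sketch of what this test exercises: a busybox pod whose command writes to stdout, which the kubelet then surfaces via `kubectl logs`. Pod name and message are illustrative:
+```
+kubectl run busybox-logger --restart=Never --image=busybox -- sh -c 'echo hello from busybox'
+# once the pod has run, its stdout is available from the kubelet:
+kubectl logs busybox-logger    # expected output: hello from busybox
+```
+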
+SSSSSSSSSSSSS
+------------------------------
+[sig-auth] ServiceAccounts 
+  should allow opting out of API token automount  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:41:54.292: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename svcaccounts
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-5598
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow opting out of API token automount  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: getting the auto-created API token
+Dec 10 10:41:54.949: INFO: created pod pod-service-account-defaultsa
+Dec 10 10:41:54.949: INFO: pod pod-service-account-defaultsa service account token volume mount: true
+Dec 10 10:41:54.953: INFO: created pod pod-service-account-mountsa
+Dec 10 10:41:54.953: INFO: pod pod-service-account-mountsa service account token volume mount: true
+Dec 10 10:41:54.955: INFO: created pod pod-service-account-nomountsa
+Dec 10 10:41:54.955: INFO: pod pod-service-account-nomountsa service account token volume mount: false
+Dec 10 10:41:54.959: INFO: created pod pod-service-account-defaultsa-mountspec
+Dec 10 10:41:54.959: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
+Dec 10 10:41:54.963: INFO: created pod pod-service-account-mountsa-mountspec
+Dec 10 10:41:54.963: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
+Dec 10 10:41:54.966: INFO: created pod pod-service-account-nomountsa-mountspec
+Dec 10 10:41:54.966: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
+Dec 10 10:41:54.970: INFO: created pod pod-service-account-defaultsa-nomountspec
+Dec 10 10:41:54.970: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
+Dec 10 10:41:54.973: INFO: created pod pod-service-account-mountsa-nomountspec
+Dec 10 10:41:54.973: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
+Dec 10 10:41:54.978: INFO: created pod pod-service-account-nomountsa-nomountspec
+Dec 10 10:41:54.978: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
+[AfterEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:41:54.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "svcaccounts-5598" for this suite.
+Dec 10 10:42:16.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:42:17.064: INFO: namespace svcaccounts-5598 deletion completed in 22.081752925s
+
+• [SLOW TEST:22.772 seconds]
+[sig-auth] ServiceAccounts
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
+  should allow opting out of API token automount  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
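+
+The pods above combine a ServiceAccount-level `automountServiceAccountToken` with an optional pod-level override; the pod field, when set, wins. A minimal sketch with hypothetical names:
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: nomount-sa
+automountServiceAccountToken: false
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nomount-pod
+spec:
+  serviceAccountName: nomount-sa
+  # spec.automountServiceAccountToken on the pod, if set, overrides the SA
+  containers:
+  - name: test
+    image: busybox
+    command: ["sleep", "3600"]
+EOF
+# with automount off, no token volume is injected:
+kubectl get pod nomount-pod -o jsonpath='{.spec.volumes}'
+```
+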
+SSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:42:17.064: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename secrets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9850
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-7768
+STEP: Creating secret with name secret-test-4c29131a-9e5d-4d1f-b0e6-8537c626b8c4
+STEP: Creating a pod to test consume secrets
+Dec 10 10:42:17.354: INFO: Waiting up to 5m0s for pod "pod-secrets-5d261fee-c090-49a6-ad22-a83b19c3630a" in namespace "secrets-9850" to be "success or failure"
+Dec 10 10:42:17.360: INFO: Pod "pod-secrets-5d261fee-c090-49a6-ad22-a83b19c3630a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.951865ms
+Dec 10 10:42:19.368: INFO: Pod "pod-secrets-5d261fee-c090-49a6-ad22-a83b19c3630a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014180943s
+Dec 10 10:42:21.371: INFO: Pod "pod-secrets-5d261fee-c090-49a6-ad22-a83b19c3630a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017060004s
+STEP: Saw pod success
+Dec 10 10:42:21.371: INFO: Pod "pod-secrets-5d261fee-c090-49a6-ad22-a83b19c3630a" satisfied condition "success or failure"
+Dec 10 10:42:21.373: INFO: Trying to get logs from node dce82 pod pod-secrets-5d261fee-c090-49a6-ad22-a83b19c3630a container secret-volume-test: 
+STEP: delete the pod
+Dec 10 10:42:21.387: INFO: Waiting for pod pod-secrets-5d261fee-c090-49a6-ad22-a83b19c3630a to disappear
+Dec 10 10:42:21.389: INFO: Pod pod-secrets-5d261fee-c090-49a6-ad22-a83b19c3630a no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:42:21.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-9850" for this suite.
+Dec 10 10:42:27.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:42:27.461: INFO: namespace secrets-9850 deletion completed in 6.068598108s
+STEP: Destroying namespace "secret-namespace-7768" for this suite.
+Dec 10 10:42:33.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:42:33.527: INFO: namespace secret-namespace-7768 deletion completed in 6.066055126s
+
+• [SLOW TEST:16.463 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
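+
+Secrets are namespace-scoped, which is the property this test verifies: two secrets may share a name in different namespaces, and a pod mounts only the copy from its own namespace. Illustrative names:
+```
+kubectl create namespace ns-a
+kubectl create namespace ns-b
+kubectl create secret generic shared-name --from-literal=data-1=from-ns-a -n ns-a
+kubectl create secret generic shared-name --from-literal=data-1=from-ns-b -n ns-b
+# a pod in ns-a that mounts secret "shared-name" sees only from-ns-a
+```
+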
+SSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:42:33.527: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7043
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name projected-configmap-test-volume-00db93e8-83ab-471c-9aa4-4c4b12804582
+STEP: Creating a pod to test consume configMaps
+Dec 10 10:42:33.673: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-65acf2bb-cafb-45c0-afb4-b94166909e0a" in namespace "projected-7043" to be "success or failure"
+Dec 10 10:42:33.676: INFO: Pod "pod-projected-configmaps-65acf2bb-cafb-45c0-afb4-b94166909e0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275207ms
+Dec 10 10:42:35.679: INFO: Pod "pod-projected-configmaps-65acf2bb-cafb-45c0-afb4-b94166909e0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005326336s
+Dec 10 10:42:37.682: INFO: Pod "pod-projected-configmaps-65acf2bb-cafb-45c0-afb4-b94166909e0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008451108s
+STEP: Saw pod success
+Dec 10 10:42:37.682: INFO: Pod "pod-projected-configmaps-65acf2bb-cafb-45c0-afb4-b94166909e0a" satisfied condition "success or failure"
+Dec 10 10:42:37.684: INFO: Trying to get logs from node dce82 pod pod-projected-configmaps-65acf2bb-cafb-45c0-afb4-b94166909e0a container projected-configmap-volume-test: 
+STEP: delete the pod
+Dec 10 10:42:37.696: INFO: Waiting for pod pod-projected-configmaps-65acf2bb-cafb-45c0-afb4-b94166909e0a to disappear
+Dec 10 10:42:37.698: INFO: Pod pod-projected-configmaps-65acf2bb-cafb-45c0-afb4-b94166909e0a no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:42:37.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-7043" for this suite.
+Dec 10 10:42:43.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:42:43.774: INFO: namespace projected-7043 deletion completed in 6.072477402s
+
+• [SLOW TEST:10.246 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+S
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl expose 
+  should create services for rc  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:42:43.774: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6465
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[It] should create services for rc  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating Redis RC
+Dec 10 10:42:43.912: INFO: namespace kubectl-6465
+Dec 10 10:42:43.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-6465'
+Dec 10 10:42:44.148: INFO: stderr: ""
+Dec 10 10:42:44.148: INFO: stdout: "replicationcontroller/redis-master created\n"
+STEP: Waiting for Redis master to start.
+Dec 10 10:42:45.151: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 10:42:45.151: INFO: Found 0 / 1
+Dec 10 10:42:46.152: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 10:42:46.152: INFO: Found 1 / 1
+Dec 10 10:42:46.152: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+Dec 10 10:42:46.155: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 10:42:46.155: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+Dec 10 10:42:46.155: INFO: wait on redis-master startup in kubectl-6465 
+Dec 10 10:42:46.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 logs redis-master-746rb redis-master --namespace=kubectl-6465'
+Dec 10 10:42:46.256: INFO: stderr: ""
+Dec 10 10:42:46.256: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Dec 10:42:45.486 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Dec 10:42:45.486 # Server started, Redis version 3.2.12\n1:M 10 Dec 10:42:45.486 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Dec 10:42:45.486 * The server is now ready to accept connections on port 6379\n"
+STEP: exposing RC
+Dec 10 10:42:46.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6465'
+Dec 10 10:42:46.358: INFO: stderr: ""
+Dec 10 10:42:46.358: INFO: stdout: "service/rm2 exposed\n"
+Dec 10 10:42:46.361: INFO: Service rm2 in namespace kubectl-6465 found.
+STEP: exposing service
+Dec 10 10:42:48.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6465'
+Dec 10 10:42:48.465: INFO: stderr: ""
+Dec 10 10:42:48.465: INFO: stdout: "service/rm3 exposed\n"
+Dec 10 10:42:48.467: INFO: Service rm3 in namespace kubectl-6465 found.
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:42:50.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-6465" for this suite.
+Dec 10 10:43:12.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:43:12.561: INFO: namespace kubectl-6465 deletion completed in 22.083019158s
+
+• [SLOW TEST:28.787 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl expose
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should create services for rc  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
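+
+The two `kubectl expose` invocations from the log, reduced to their essentials (the replication controller and namespace come from the test fixture):
+```
+# expose a replication controller as a new service
+kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
+# expose an existing service under another name and port
+kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
+kubectl get svc rm2 rm3
+```
+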
+SSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:43:12.561: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-939
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 10:43:12.749: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a30e4fb-ba76-43f7-89f5-d11b82980086" in namespace "downward-api-939" to be "success or failure"
+Dec 10 10:43:12.751: INFO: Pod "downwardapi-volume-0a30e4fb-ba76-43f7-89f5-d11b82980086": Phase="Pending", Reason="", readiness=false. Elapsed: 1.841948ms
+Dec 10 10:43:14.758: INFO: Pod "downwardapi-volume-0a30e4fb-ba76-43f7-89f5-d11b82980086": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00916978s
+STEP: Saw pod success
+Dec 10 10:43:14.758: INFO: Pod "downwardapi-volume-0a30e4fb-ba76-43f7-89f5-d11b82980086" satisfied condition "success or failure"
+Dec 10 10:43:14.760: INFO: Trying to get logs from node dce82 pod downwardapi-volume-0a30e4fb-ba76-43f7-89f5-d11b82980086 container client-container: 
+STEP: delete the pod
+Dec 10 10:43:14.772: INFO: Waiting for pod downwardapi-volume-0a30e4fb-ba76-43f7-89f5-d11b82980086 to disappear
+Dec 10 10:43:14.774: INFO: Pod downwardapi-volume-0a30e4fb-ba76-43f7-89f5-d11b82980086 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:43:14.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-939" for this suite.
+Dec 10 10:43:20.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:43:20.854: INFO: namespace downward-api-939 deletion completed in 6.077027041s
+
+• [SLOW TEST:8.293 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
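+
+A sketch of the downward API volume this test creates: the pod's own name is projected into a file via `fieldRef: metadata.name` (names are illustrative):
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["cat", "/etc/podinfo/podname"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: podname
+        fieldRef:
+          fieldPath: metadata.name
+EOF
+kubectl logs downward-demo    # prints: downward-demo
+```
+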
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] ConfigMap 
+  should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-node] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:43:20.854: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename configmap
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-827
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap configmap-827/configmap-test-3785f95c-1809-4fe7-8240-cb912b7febd0
+STEP: Creating a pod to test consume configMaps
+Dec 10 10:43:21.007: INFO: Waiting up to 5m0s for pod "pod-configmaps-5bfcf623-cb4c-490a-bd25-e2b85dd26d9b" in namespace "configmap-827" to be "success or failure"
+Dec 10 10:43:21.009: INFO: Pod "pod-configmaps-5bfcf623-cb4c-490a-bd25-e2b85dd26d9b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.952492ms
+Dec 10 10:43:23.011: INFO: Pod "pod-configmaps-5bfcf623-cb4c-490a-bd25-e2b85dd26d9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004700874s
+STEP: Saw pod success
+Dec 10 10:43:23.012: INFO: Pod "pod-configmaps-5bfcf623-cb4c-490a-bd25-e2b85dd26d9b" satisfied condition "success or failure"
+Dec 10 10:43:23.013: INFO: Trying to get logs from node dce82 pod pod-configmaps-5bfcf623-cb4c-490a-bd25-e2b85dd26d9b container env-test: 
+STEP: delete the pod
+Dec 10 10:43:23.029: INFO: Waiting for pod pod-configmaps-5bfcf623-cb4c-490a-bd25-e2b85dd26d9b to disappear
+Dec 10 10:43:23.033: INFO: Pod pod-configmaps-5bfcf623-cb4c-490a-bd25-e2b85dd26d9b no longer exists
+[AfterEach] [sig-node] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:43:23.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-827" for this suite.
+Dec 10 10:43:29.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:43:29.112: INFO: namespace configmap-827 deletion completed in 6.074714009s
+
+• [SLOW TEST:8.258 seconds]
+[sig-node] ConfigMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
+  should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
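+
+The environment-variable variant looks roughly like this, with a `configMapKeyRef` instead of a volume (hypothetical names):
+```
+kubectl create configmap env-cm --from-literal=data-1=value-1
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: env-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: env-test
+    image: busybox
+    command: ["env"]
+    env:
+    - name: DATA_1
+      valueFrom:
+        configMapKeyRef:
+          name: env-cm
+          key: data-1
+EOF
+kubectl logs env-demo | grep DATA_1    # DATA_1=value-1
+```
+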
+SSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:43:29.113: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8799
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 10:43:29.264: INFO: Waiting up to 5m0s for pod "downwardapi-volume-121c354f-b8d3-407a-992b-976c62a5d34d" in namespace "projected-8799" to be "success or failure"
+Dec 10 10:43:29.266: INFO: Pod "downwardapi-volume-121c354f-b8d3-407a-992b-976c62a5d34d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012301ms
+Dec 10 10:43:31.269: INFO: Pod "downwardapi-volume-121c354f-b8d3-407a-992b-976c62a5d34d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004798646s
+STEP: Saw pod success
+Dec 10 10:43:31.269: INFO: Pod "downwardapi-volume-121c354f-b8d3-407a-992b-976c62a5d34d" satisfied condition "success or failure"
+Dec 10 10:43:31.271: INFO: Trying to get logs from node dce82 pod downwardapi-volume-121c354f-b8d3-407a-992b-976c62a5d34d container client-container: 
+STEP: delete the pod
+Dec 10 10:43:31.285: INFO: Waiting for pod downwardapi-volume-121c354f-b8d3-407a-992b-976c62a5d34d to disappear
+Dec 10 10:43:31.287: INFO: Pod downwardapi-volume-121c354f-b8d3-407a-992b-976c62a5d34d no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:43:31.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-8799" for this suite.
+Dec 10 10:43:37.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:43:37.372: INFO: namespace projected-8799 deletion completed in 6.081638191s
+
+• [SLOW TEST:8.259 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:43:37.372: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1237
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir volume type on node default medium
+Dec 10 10:43:37.511: INFO: Waiting up to 5m0s for pod "pod-a7f324da-1984-4009-a6b6-09a2c837f324" in namespace "emptydir-1237" to be "success or failure"
+Dec 10 10:43:37.513: INFO: Pod "pod-a7f324da-1984-4009-a6b6-09a2c837f324": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181163ms
+Dec 10 10:43:39.517: INFO: Pod "pod-a7f324da-1984-4009-a6b6-09a2c837f324": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005799657s
+STEP: Saw pod success
+Dec 10 10:43:39.517: INFO: Pod "pod-a7f324da-1984-4009-a6b6-09a2c837f324" satisfied condition "success or failure"
+Dec 10 10:43:39.519: INFO: Trying to get logs from node dce82 pod pod-a7f324da-1984-4009-a6b6-09a2c837f324 container test-container: 
+STEP: delete the pod
+Dec 10 10:43:39.534: INFO: Waiting for pod pod-a7f324da-1984-4009-a6b6-09a2c837f324 to disappear
+Dec 10 10:43:39.537: INFO: Pod pod-a7f324da-1984-4009-a6b6-09a2c837f324 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:43:39.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-1237" for this suite.
+Dec 10 10:43:45.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:43:45.608: INFO: namespace emptydir-1237 deletion completed in 6.066880683s
+
+• [SLOW TEST:8.237 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
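+
+For the default (node-disk) medium, the suite checks the permissions on the emptyDir mount point; a rough by-hand equivalent, assuming the expected mode is 0777 as in the stock conformance check:
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-mode-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["sh", "-c", "ls -ld /test-volume"]   # expect drwxrwxrwx (0777)
+    volumeMounts:
+    - name: vol
+      mountPath: /test-volume
+  volumes:
+  - name: vol
+    emptyDir: {}
+EOF
+kubectl logs emptydir-mode-demo
+```
+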
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Job 
+  should delete a job [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] Job
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:43:45.610: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename job
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-9061
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete a job [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a job
+STEP: Ensuring active pods == parallelism
+STEP: delete a job
+STEP: deleting Job.batch foo in namespace job-9061, will wait for the garbage collector to delete the pods
+Dec 10 10:43:47.817: INFO: Deleting Job.batch foo took: 5.737392ms
+Dec 10 10:43:48.217: INFO: Terminating Job.batch foo pods took: 400.2625ms
+STEP: Ensuring job was deleted
+[AfterEach] [sig-apps] Job
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:44:22.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "job-9061" for this suite.
+Dec 10 10:44:28.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:44:28.401: INFO: namespace job-9061 deletion completed in 6.076230566s
+
+• [SLOW TEST:42.791 seconds]
+[sig-apps] Job
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should delete a job [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Update Demo 
+  should create and stop a replication controller  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:44:28.401: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7058
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[BeforeEach] [k8s.io] Update Demo
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
+[It] should create and stop a replication controller  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating a replication controller
+Dec 10 10:44:28.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-7058'
+Dec 10 10:44:28.704: INFO: stderr: ""
+Dec 10 10:44:28.704: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Dec 10 10:44:28.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7058'
+Dec 10 10:44:28.788: INFO: stderr: ""
+Dec 10 10:44:28.788: INFO: stdout: "update-demo-nautilus-prqs5 update-demo-nautilus-s9dp8 "
+Dec 10 10:44:28.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-prqs5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7058'
+Dec 10 10:44:28.870: INFO: stderr: ""
+Dec 10 10:44:28.870: INFO: stdout: ""
+Dec 10 10:44:28.870: INFO: update-demo-nautilus-prqs5 is created but not running
+Dec 10 10:44:33.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7058'
+Dec 10 10:44:33.954: INFO: stderr: ""
+Dec 10 10:44:33.954: INFO: stdout: "update-demo-nautilus-prqs5 update-demo-nautilus-s9dp8 "
+Dec 10 10:44:33.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-prqs5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7058'
+Dec 10 10:44:34.034: INFO: stderr: ""
+Dec 10 10:44:34.034: INFO: stdout: "true"
+Dec 10 10:44:34.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-prqs5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7058'
+Dec 10 10:44:34.108: INFO: stderr: ""
+Dec 10 10:44:34.109: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Dec 10 10:44:34.109: INFO: validating pod update-demo-nautilus-prqs5
+Dec 10 10:44:34.114: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Dec 10 10:44:34.114: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Dec 10 10:44:34.114: INFO: update-demo-nautilus-prqs5 is verified up and running
+Dec 10 10:44:34.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-s9dp8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7058'
+Dec 10 10:44:34.197: INFO: stderr: ""
+Dec 10 10:44:34.197: INFO: stdout: "true"
+Dec 10 10:44:34.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-s9dp8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7058'
+Dec 10 10:44:34.285: INFO: stderr: ""
+Dec 10 10:44:34.285: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Dec 10 10:44:34.285: INFO: validating pod update-demo-nautilus-s9dp8
+Dec 10 10:44:34.289: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Dec 10 10:44:34.289: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Dec 10 10:44:34.289: INFO: update-demo-nautilus-s9dp8 is verified up and running
+STEP: using delete to clean up resources
+Dec 10 10:44:34.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete --grace-period=0 --force -f - --namespace=kubectl-7058'
+Dec 10 10:44:34.358: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Dec 10 10:44:34.358: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
+Dec 10 10:44:34.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7058'
+Dec 10 10:44:34.454: INFO: stderr: "No resources found.\n"
+Dec 10 10:44:34.454: INFO: stdout: ""
+Dec 10 10:44:34.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -l name=update-demo --namespace=kubectl-7058 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Dec 10 10:44:34.539: INFO: stderr: ""
+Dec 10 10:44:34.539: INFO: stdout: "update-demo-nautilus-prqs5\nupdate-demo-nautilus-s9dp8\n"
+Dec 10 10:44:35.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7058'
+Dec 10 10:44:35.119: INFO: stderr: "No resources found.\n"
+Dec 10 10:44:35.119: INFO: stdout: ""
+Dec 10 10:44:35.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -l name=update-demo --namespace=kubectl-7058 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Dec 10 10:44:35.205: INFO: stderr: ""
+Dec 10 10:44:35.205: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:44:35.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-7058" for this suite.
+Dec 10 10:44:41.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:44:41.301: INFO: namespace kubectl-7058 deletion completed in 6.091888335s
+
+• [SLOW TEST:12.900 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Update Demo
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should create and stop a replication controller  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:44:41.302: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename gc
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-9629
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
+STEP: Gathering metrics
+Dec 10 10:45:11.468: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:45:11.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+W1210 10:45:11.468749      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+STEP: Destroying namespace "gc-9629" for this suite.
+Dec 10 10:45:17.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:45:17.547: INFO: namespace gc-9629 deletion completed in 6.07494496s
+
+• [SLOW TEST:36.245 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
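+
+Orphaning via `deleteOptions.propagationPolicy=Orphan` is what kubectl's `--cascade=false` flag mapped to in this release; the ReplicaSet survives the deployment's deletion:
+```
+kubectl create deployment nginx --image=nginx
+kubectl delete deployment nginx --cascade=false   # propagationPolicy: Orphan
+kubectl get rs    # the ReplicaSet and its pods remain, now without an owner
+```
+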
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run job 
+  should create a job from an image when restart is OnFailure  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:45:17.547: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7384
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[BeforeEach] [k8s.io] Kubectl run job
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1613
+[It] should create a job from an image when restart is OnFailure  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Dec 10 10:45:17.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7384'
+Dec 10 10:45:17.777: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Dec 10 10:45:17.777: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
+STEP: verifying the job e2e-test-nginx-job was created
+[AfterEach] [k8s.io] Kubectl run job
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1618
+Dec 10 10:45:17.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete jobs e2e-test-nginx-job --namespace=kubectl-7384'
+Dec 10 10:45:17.860: INFO: stderr: ""
+Dec 10 10:45:17.860: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:45:17.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-7384" for this suite.
+Dec 10 10:45:23.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:45:23.924: INFO: namespace kubectl-7384 deletion completed in 6.060503672s
+
+• [SLOW TEST:6.377 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl run job
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should create a job from an image when restart is OnFailure  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
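+
+The command under test, as run in the log (the `job/v1` generator was already deprecated in this release in favor of `kubectl create`, as the stderr warning above notes):
+```
+kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
+    --image=docker.io/library/nginx:1.14-alpine
+kubectl get job e2e-test-nginx-job
+kubectl delete jobs e2e-test-nginx-job
+```
+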
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:45:23.925: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename pods
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-1019
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
+[It] should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating the pod
+STEP: setting up watch
+STEP: submitting the pod to kubernetes
+Dec 10 10:45:24.075: INFO: observed the pod list
+STEP: verifying the pod is in kubernetes
+STEP: verifying pod creation was observed
+STEP: deleting the pod gracefully
+STEP: verifying the kubelet observed the termination notice
+STEP: verifying pod deletion was observed
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:45:31.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-1019" for this suite.
+Dec 10 10:45:37.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:45:37.990: INFO: namespace pods-1019 deletion completed in 6.083445464s
+
+• [SLOW TEST:14.065 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:45:37.990: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9304
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating secret with name s-test-opt-del-6364068a-e96d-47de-9826-8ba4c870b088
+STEP: Creating secret with name s-test-opt-upd-600aa0ea-bad0-4e34-944e-f77edc499eaf
+STEP: Creating the pod
+STEP: Deleting secret s-test-opt-del-6364068a-e96d-47de-9826-8ba4c870b088
+STEP: Updating secret s-test-opt-upd-600aa0ea-bad0-4e34-944e-f77edc499eaf
+STEP: Creating secret with name s-test-opt-create-91d0ce91-0198-45ad-8a2c-39e7c0a299eb
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:47:12.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-9304" for this suite.
+Dec 10 10:47:34.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:47:34.690: INFO: namespace projected-9304 deletion completed in 22.088207045s
+
+• [SLOW TEST:116.700 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
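+
+The `optional: true` flag on a projected secret source is what lets this test delete and recreate secrets while the pod keeps running; a minimal sketch with hypothetical names:
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: optional-secret-demo
+spec:
+  containers:
+  - name: test
+    image: busybox
+    command: ["sleep", "3600"]
+    volumeMounts:
+    - name: secrets
+      mountPath: /etc/secrets
+  volumes:
+  - name: secrets
+    projected:
+      sources:
+      - secret:
+          name: may-not-exist
+          optional: true   # the pod starts even if this secret is absent
+EOF
+# creating, updating, or deleting the secret later is reflected under /etc/secrets
+```
+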
+S
+------------------------------
+[sig-api-machinery] Namespaces [Serial] 
+  should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:47:34.690: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename namespaces
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-2297
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a test namespace
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-2294
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Creating a service in the namespace
+STEP: Deleting the namespace
+STEP: Waiting for the namespace to be removed.
+STEP: Recreating the namespace
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-94
+STEP: Verifying there is no service in the namespace
+[AfterEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:47:41.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "namespaces-2297" for this suite.
+Dec 10 10:47:47.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:47:47.268: INFO: namespace namespaces-2297 deletion completed in 6.12293516s
+STEP: Destroying namespace "nsdeletetest-2294" for this suite.
+Dec 10 10:47:47.271: INFO: Namespace nsdeletetest-2294 was already deleted
+STEP: Destroying namespace "nsdeletetest-94" for this suite.
+Dec 10 10:47:53.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:47:53.348: INFO: namespace nsdeletetest-94 deletion completed in 6.076738094s
+
+• [SLOW TEST:18.658 seconds]
+[sig-api-machinery] Namespaces [Serial]
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
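+
+The sequence above can be reproduced by hand with kubectl; a minimal sketch, with illustrative names standing in for the generated nsdeletetest-* namespaces:
+```
+# Create a namespace containing a service, then delete the namespace.
+kubectl create namespace nsdelete-demo
+kubectl -n nsdelete-demo create service clusterip test-svc --tcp=80:80
+kubectl delete namespace nsdelete-demo
+kubectl wait --for=delete namespace/nsdelete-demo --timeout=120s
+
+# Recreate the namespace and confirm the service did not survive.
+kubectl create namespace nsdelete-demo
+kubectl -n nsdelete-demo get services   # expect: No resources found
+```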
+SSSSS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:47:53.348: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-410
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Performing setup for networking test in namespace pod-network-test-410
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Dec 10 10:47:53.505: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Dec 10 10:48:17.577: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.28.8.97:8080/dial?request=hostName&protocol=udp&host=172.28.194.243&port=8081&tries=1'] Namespace:pod-network-test-410 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Dec 10 10:48:17.577: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+Dec 10 10:48:17.705: INFO: Waiting for endpoints: map[]
+Dec 10 10:48:17.707: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.28.8.97:8080/dial?request=hostName&protocol=udp&host=172.28.8.93&port=8081&tries=1'] Namespace:pod-network-test-410 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Dec 10 10:48:17.707: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+Dec 10 10:48:17.831: INFO: Waiting for endpoints: map[]
+Dec 10 10:48:17.835: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.28.8.97:8080/dial?request=hostName&protocol=udp&host=172.28.104.199&port=8081&tries=1'] Namespace:pod-network-test-410 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Dec 10 10:48:17.835: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+Dec 10 10:48:17.967: INFO: Waiting for endpoints: map[]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:48:17.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pod-network-test-410" for this suite.
+Dec 10 10:48:39.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:48:40.060: INFO: namespace pod-network-test-410 deletion completed in 22.08828786s
+
+• [SLOW TEST:46.711 seconds]
+[sig-network] Networking
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
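+
+The ExecWithOptions entries above show the probe mechanics: a host-network test pod curls the netserver's /dial endpoint, which relays a single UDP "hostName" request to the target pod IP and returns the hostnames that answered. An equivalent manual probe, reusing the pod IPs from this run:
+```
+# Ask the netserver at 172.28.8.97 to dial one target pod over UDP port 8081.
+kubectl -n pod-network-test-410 exec host-test-container-pod -c hostexec -- \
+  /bin/sh -c "curl -g -q -s 'http://172.28.8.97:8080/dial?request=hostName&protocol=udp&host=172.28.194.243&port=8081&tries=1'"
+# A JSON reply naming the target's hostname means the UDP packet got through.
+```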
+SSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:48:40.060: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6842
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir volume type on tmpfs
+Dec 10 10:48:40.210: INFO: Waiting up to 5m0s for pod "pod-5d7f19e0-5c18-41a6-835f-0fd9e4abb634" in namespace "emptydir-6842" to be "success or failure"
+Dec 10 10:48:40.242: INFO: Pod "pod-5d7f19e0-5c18-41a6-835f-0fd9e4abb634": Phase="Pending", Reason="", readiness=false. Elapsed: 31.813892ms
+Dec 10 10:48:42.246: INFO: Pod "pod-5d7f19e0-5c18-41a6-835f-0fd9e4abb634": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.035935119s
+STEP: Saw pod success
+Dec 10 10:48:42.246: INFO: Pod "pod-5d7f19e0-5c18-41a6-835f-0fd9e4abb634" satisfied condition "success or failure"
+Dec 10 10:48:42.249: INFO: Trying to get logs from node dce82 pod pod-5d7f19e0-5c18-41a6-835f-0fd9e4abb634 container test-container: 
+STEP: delete the pod
+Dec 10 10:48:42.264: INFO: Waiting for pod pod-5d7f19e0-5c18-41a6-835f-0fd9e4abb634 to disappear
+Dec 10 10:48:42.267: INFO: Pod pod-5d7f19e0-5c18-41a6-835f-0fd9e4abb634 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:48:42.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-6842" for this suite.
+Dec 10 10:48:48.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:48:48.357: INFO: namespace emptydir-6842 deletion completed in 6.085246661s
+
+• [SLOW TEST:8.297 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
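+
+What "correct mode" means here: an emptyDir with medium Memory is mounted as tmpfs, and the mount point itself should carry mode drwxrwxrwx. A minimal sketch of an equivalent pod, with an illustrative name and a busybox image standing in for the e2e mounttest image:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-tmpfs-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox
+    # Print the volume's mode and confirm the mount is tmpfs.
+    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
+    volumeMounts:
+    - name: test-volume
+      mountPath: /test-volume
+  volumes:
+  - name: test-volume
+    emptyDir:
+      medium: Memory   # tmpfs-backed emptyDir
+EOF
+kubectl logs emptydir-tmpfs-demo   # once the pod has completed
+```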
+SS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:48:48.357: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1174
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir 0644 on tmpfs
+Dec 10 10:48:48.503: INFO: Waiting up to 5m0s for pod "pod-ae21d358-f734-4a37-97ea-34856dfba49c" in namespace "emptydir-1174" to be "success or failure"
+Dec 10 10:48:48.504: INFO: Pod "pod-ae21d358-f734-4a37-97ea-34856dfba49c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.596498ms
+Dec 10 10:48:50.507: INFO: Pod "pod-ae21d358-f734-4a37-97ea-34856dfba49c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004538777s
+STEP: Saw pod success
+Dec 10 10:48:50.507: INFO: Pod "pod-ae21d358-f734-4a37-97ea-34856dfba49c" satisfied condition "success or failure"
+Dec 10 10:48:50.510: INFO: Trying to get logs from node dce82 pod pod-ae21d358-f734-4a37-97ea-34856dfba49c container test-container: 
+STEP: delete the pod
+Dec 10 10:48:50.527: INFO: Waiting for pod pod-ae21d358-f734-4a37-97ea-34856dfba49c to disappear
+Dec 10 10:48:50.530: INFO: Pod pod-ae21d358-f734-4a37-97ea-34856dfba49c no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:48:50.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-1174" for this suite.
+Dec 10 10:48:56.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:48:56.623: INFO: namespace emptydir-1174 deletion completed in 6.088034674s
+
+• [SLOW TEST:8.266 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
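+
+The (root,0644,tmpfs) variant exercises file rather than mount-point permissions: a file is written as root inside the tmpfs volume with mode 0644 and read back. A sketch under the same assumptions as the previous example:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-0644-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["sh", "-c", "echo -n mount-tmpfs > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && cat /test-volume/f"]
+    volumeMounts:
+    - name: test-volume
+      mountPath: /test-volume
+  volumes:
+  - name: test-volume
+    emptyDir:
+      medium: Memory
+EOF
+```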
+[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
+  should check if Kubernetes master services is included in cluster-info  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:48:56.623: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4549
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: validating cluster-info
+Dec 10 10:48:56.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 cluster-info'
+Dec 10 10:48:56.870: INFO: stderr: ""
+Dec 10 10:48:56.870: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n\x1b[0;32mcoredns\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443/api/v1/namespaces/kube-system/services/coredns:dns/proxy\x1b[0m\n\x1b[0;32mcoredns-second\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443/api/v1/namespaces/kube-system/services/coredns-second:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:48:56.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-4549" for this suite.
+Dec 10 10:49:02.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:49:02.960: INFO: namespace kubectl-4549 deletion completed in 6.084752961s
+
+• [SLOW TEST:6.337 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl cluster-info
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should check if Kubernetes master services is included in cluster-info  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
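+
+The assertion here is a plain substring check on kubectl's output (the \x1b escapes in the captured stdout are kubectl's terminal colors). The same check by hand:
+```
+kubectl --kubeconfig=/tmp/kubeconfig-845205613 cluster-info | grep 'Kubernetes master'
+```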
+SS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:49:02.960: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename configmap
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3484
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name configmap-test-volume-map-a8f9128a-2842-45f7-ace8-e09b5d8ae3e5
+STEP: Creating a pod to test consume configMaps
+Dec 10 10:49:03.113: INFO: Waiting up to 5m0s for pod "pod-configmaps-03ab76fa-cf62-48bb-a83a-7367e1fc5da0" in namespace "configmap-3484" to be "success or failure"
+Dec 10 10:49:03.114: INFO: Pod "pod-configmaps-03ab76fa-cf62-48bb-a83a-7367e1fc5da0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.931894ms
+Dec 10 10:49:05.119: INFO: Pod "pod-configmaps-03ab76fa-cf62-48bb-a83a-7367e1fc5da0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006637926s
+STEP: Saw pod success
+Dec 10 10:49:05.119: INFO: Pod "pod-configmaps-03ab76fa-cf62-48bb-a83a-7367e1fc5da0" satisfied condition "success or failure"
+Dec 10 10:49:05.122: INFO: Trying to get logs from node dce82 pod pod-configmaps-03ab76fa-cf62-48bb-a83a-7367e1fc5da0 container configmap-volume-test: 
+STEP: delete the pod
+Dec 10 10:49:05.142: INFO: Waiting for pod pod-configmaps-03ab76fa-cf62-48bb-a83a-7367e1fc5da0 to disappear
+Dec 10 10:49:05.145: INFO: Pod pod-configmaps-03ab76fa-cf62-48bb-a83a-7367e1fc5da0 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:49:05.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-3484" for this suite.
+Dec 10 10:49:11.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:49:11.353: INFO: namespace configmap-3484 deletion completed in 6.204154656s
+
+• [SLOW TEST:8.393 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
+  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
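+
+"With mappings as non-root" combines two things: the ConfigMap key is remapped to a different file name via items, and the consuming container runs with a non-root UID. A minimal sketch (names, UID, and image are illustrative):
+```
+kubectl create configmap demo-cm --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-mapping-demo
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1000          # non-root
+  containers:
+  - name: configmap-volume-test
+    image: busybox
+    command: ["sh", "-c", "cat /etc/configmap-volume/mapped-key"]
+    volumeMounts:
+    - name: configmap-volume
+      mountPath: /etc/configmap-volume
+  volumes:
+  - name: configmap-volume
+    configMap:
+      name: demo-cm
+      items:
+      - key: data-1          # original key...
+        path: mapped-key     # ...surfaces under this file name
+EOF
+```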
+SSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
+  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:49:11.353: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-1634
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:49:15.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-1634" for this suite.
+Dec 10 10:49:53.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:49:53.606: INFO: namespace kubelet-test-1634 deletion completed in 38.080337281s
+
+• [SLOW TEST:42.253 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  when scheduling a busybox Pod with hostAliases
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
+    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
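+
+hostAliases entries are appended by the kubelet to the pod's /etc/hosts, which is what the test reads back. A minimal sketch (IP and hostnames are illustrative):
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hostaliases-demo
+spec:
+  restartPolicy: Never
+  hostAliases:
+  - ip: "123.45.67.89"
+    hostnames: ["foo.local", "bar.local"]
+  containers:
+  - name: busybox-host-aliases
+    image: busybox
+    command: ["sh", "-c", "cat /etc/hosts"]   # the alias line should appear here
+EOF
+kubectl logs hostaliases-demo
+```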
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:49:53.607: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-83
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 10:49:53.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cba940de-de88-4d7b-87a0-29d9d93c541c" in namespace "downward-api-83" to be "success or failure"
+Dec 10 10:49:53.770: INFO: Pod "downwardapi-volume-cba940de-de88-4d7b-87a0-29d9d93c541c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.573751ms
+Dec 10 10:49:55.774: INFO: Pod "downwardapi-volume-cba940de-de88-4d7b-87a0-29d9d93c541c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007949772s
+STEP: Saw pod success
+Dec 10 10:49:55.774: INFO: Pod "downwardapi-volume-cba940de-de88-4d7b-87a0-29d9d93c541c" satisfied condition "success or failure"
+Dec 10 10:49:55.777: INFO: Trying to get logs from node dce82 pod downwardapi-volume-cba940de-de88-4d7b-87a0-29d9d93c541c container client-container: 
+STEP: delete the pod
+Dec 10 10:49:55.793: INFO: Waiting for pod downwardapi-volume-cba940de-de88-4d7b-87a0-29d9d93c541c to disappear
+Dec 10 10:49:55.795: INFO: Pod downwardapi-volume-cba940de-de88-4d7b-87a0-29d9d93c541c no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:49:55.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-83" for this suite.
+Dec 10 10:50:01.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:50:01.870: INFO: namespace downward-api-83 deletion completed in 6.071969717s
+
+• [SLOW TEST:8.263 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
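+
+When a container declares no CPU limit, a downward API resourceFieldRef for limits.cpu resolves to the node's allocatable CPU, which is the behavior asserted above. A sketch:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-cpu-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    # No resources.limits.cpu set, so the file reflects node allocatable CPU.
+    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: cpu_limit
+        resourceFieldRef:
+          containerName: client-container
+          resource: limits.cpu
+EOF
+```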
+SSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:50:01.870: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7699
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating the pod
+Dec 10 10:50:04.540: INFO: Successfully updated pod "annotationupdatec8313e2b-783b-4b9a-94ef-10406784e090"
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:50:06.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-7699" for this suite.
+Dec 10 10:50:28.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:50:28.639: INFO: namespace projected-7699 deletion completed in 22.075073238s
+
+• [SLOW TEST:26.769 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
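+
+The "Successfully updated pod" line is the interesting half: annotations exposed through a projected downwardAPI volume are refreshed in place by the kubelet after the pod is patched, with no restart. A sketch of both halves (names and image illustrative):
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: annotationupdate-demo
+  annotations:
+    build: one
+spec:
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: annotations
+            fieldRef:
+              fieldPath: metadata.annotations
+EOF
+# Change the annotation; the mounted file is updated eventually, not instantly.
+kubectl annotate pod annotationupdate-demo build=two --overwrite
+```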
+SSSSSSSSSS
+------------------------------
+[k8s.io] Variable Expansion 
+  should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:50:28.639: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename var-expansion
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-8457
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test env composition
+Dec 10 10:50:28.789: INFO: Waiting up to 5m0s for pod "var-expansion-9d283d33-330e-4487-adfd-8ef57e8d4fdb" in namespace "var-expansion-8457" to be "success or failure"
+Dec 10 10:50:28.791: INFO: Pod "var-expansion-9d283d33-330e-4487-adfd-8ef57e8d4fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235035ms
+Dec 10 10:50:30.794: INFO: Pod "var-expansion-9d283d33-330e-4487-adfd-8ef57e8d4fdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004886596s
+STEP: Saw pod success
+Dec 10 10:50:30.794: INFO: Pod "var-expansion-9d283d33-330e-4487-adfd-8ef57e8d4fdb" satisfied condition "success or failure"
+Dec 10 10:50:30.796: INFO: Trying to get logs from node dce82 pod var-expansion-9d283d33-330e-4487-adfd-8ef57e8d4fdb container dapi-container: 
+STEP: delete the pod
+Dec 10 10:50:30.806: INFO: Waiting for pod var-expansion-9d283d33-330e-4487-adfd-8ef57e8d4fdb to disappear
+Dec 10 10:50:30.808: INFO: Pod var-expansion-9d283d33-330e-4487-adfd-8ef57e8d4fdb no longer exists
+[AfterEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:50:30.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "var-expansion-8457" for this suite.
+Dec 10 10:50:36.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:50:36.902: INFO: namespace var-expansion-8457 deletion completed in 6.091486429s
+
+• [SLOW TEST:8.263 seconds]
+[k8s.io] Variable Expansion
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
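+
+"Composing env vars" means a $(VAR) reference inside an env value is expanded from a variable defined earlier in the same env list. A sketch:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: var-expansion-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox
+    command: ["sh", "-c", "echo $FOOBAR"]
+    env:
+    - name: FOO
+      value: foo-value
+    - name: BAR
+      value: bar-value
+    - name: FOOBAR
+      value: "$(FOO);;$(BAR)"   # composed from the two entries above
+EOF
+kubectl logs var-expansion-demo   # prints: foo-value;;bar-value
+```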
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a read only busybox container 
+  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:50:36.902: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-3693
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:50:41.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-3693" for this suite.
+Dec 10 10:51:19.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:51:19.180: INFO: namespace kubelet-test-3693 deletion completed in 38.098153104s
+
+• [SLOW TEST:42.278 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  when scheduling a read only busybox container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
+    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
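+
+readOnlyRootFilesystem mounts the container's root filesystem read-only, so any write outside a mounted volume fails. A sketch:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: readonly-rootfs-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: busybox-readonly
+    image: busybox
+    # The redirect should fail with "Read-only file system".
+    command: ["sh", "-c", "echo test > /file || true"]
+    securityContext:
+      readOnlyRootFilesystem: true
+EOF
+kubectl logs readonly-rootfs-demo
+```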
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  RecreateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:51:19.181: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename deployment
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-9444
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
+[It] RecreateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 10:51:19.319: INFO: Creating deployment "test-recreate-deployment"
+Dec 10 10:51:19.323: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
+Dec 10 10:51:19.330: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
+Dec 10 10:51:21.340: INFO: Waiting for deployment "test-recreate-deployment" to complete
+Dec 10 10:51:21.343: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
+Dec 10 10:51:21.351: INFO: Updating deployment test-recreate-deployment
+Dec 10 10:51:21.351: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
+Dec 10 10:51:21.393: INFO: Deployment "test-recreate-deployment":
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9444,SelfLink:/apis/apps/v1/namespaces/deployment-9444/deployments/test-recreate-deployment,UID:a5abe536-c7af-4665-b5b1-62e99abbcf5f,ResourceVersion:371804,Generation:2,CreationTimestamp:2019-12-10 10:51:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-10 10:51:21 +0000 UTC 2019-12-10 10:51:21 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-10 10:51:21 +0000 UTC 2019-12-10 10:51:19 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
+
+Dec 10 10:51:21.395: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9444,SelfLink:/apis/apps/v1/namespaces/deployment-9444/replicasets/test-recreate-deployment-5c8c9cc69d,UID:0c8baa05-96b5-4eaa-8401-c26d59257625,ResourceVersion:371802,Generation:1,CreationTimestamp:2019-12-10 10:51:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a5abe536-c7af-4665-b5b1-62e99abbcf5f 0xc0028c7f17 0xc0028c7f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
+Dec 10 10:51:21.395: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
+Dec 10 10:51:21.395: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9444,SelfLink:/apis/apps/v1/namespaces/deployment-9444/replicasets/test-recreate-deployment-6df85df6b9,UID:e135042f-5cde-4b69-a45f-ff6b478aee50,ResourceVersion:371792,Generation:2,CreationTimestamp:2019-12-10 10:51:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a5abe536-c7af-4665-b5b1-62e99abbcf5f 0xc0028c7fe7 0xc0028c7fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
+Dec 10 10:51:21.397: INFO: Pod "test-recreate-deployment-5c8c9cc69d-hwlm8" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-hwlm8,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9444,SelfLink:/api/v1/namespaces/deployment-9444/pods/test-recreate-deployment-5c8c9cc69d-hwlm8,UID:4dba23fc-f15b-4817-8074-3667a116537e,ResourceVersion:371805,Generation:0,CreationTimestamp:2019-12-10 10:51:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 0c8baa05-96b5-4eaa-8401-c26d59257625 0xc002e108c7 0xc002e108c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-52svc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-52svc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-52svc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002e10940} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002e10960}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:51:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 10:51:21 +0000 UTC  }],Message:,Reason:,HostIP:10.6.135.82,PodIP:,StartTime:2019-12-10 10:51:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:51:21.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-9444" for this suite.
+Dec 10 10:51:27.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:51:27.494: INFO: namespace deployment-9444 deletion completed in 6.094552336s
+
+• [SLOW TEST:8.313 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  RecreateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
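+
+The ReplicaSet dump above is the point of the test: with strategy type Recreate, the old ReplicaSet is scaled to Replicas:*0 before the new one brings up its (still Pending) pod. A sketch of triggering the same rollout, with illustrative names:
+```
+kubectl create deployment test-recreate --image=gcr.io/kubernetes-e2e-test-images/redis:1.0
+# Switch to Recreate; rollingUpdate parameters must be cleared at the same time.
+kubectl patch deployment test-recreate \
+  -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'
+# Roll out a new image; old pods terminate before any new pod starts.
+kubectl set image deployment/test-recreate redis=docker.io/library/nginx:1.14-alpine
+kubectl rollout status deployment/test-recreate
+```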
+S
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:51:27.494: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename secrets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7307
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating secret with name secret-test-map-07874a9d-8b29-4446-8cff-5a47287ea4db
+STEP: Creating a pod to test consume secrets
+Dec 10 10:51:27.647: INFO: Waiting up to 5m0s for pod "pod-secrets-f3b54c07-e927-4707-9aa7-f03ff929efe6" in namespace "secrets-7307" to be "success or failure"
+Dec 10 10:51:27.650: INFO: Pod "pod-secrets-f3b54c07-e927-4707-9aa7-f03ff929efe6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.600205ms
+Dec 10 10:51:29.654: INFO: Pod "pod-secrets-f3b54c07-e927-4707-9aa7-f03ff929efe6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006256053s
+STEP: Saw pod success
+Dec 10 10:51:29.654: INFO: Pod "pod-secrets-f3b54c07-e927-4707-9aa7-f03ff929efe6" satisfied condition "success or failure"
+Dec 10 10:51:29.658: INFO: Trying to get logs from node dce82 pod pod-secrets-f3b54c07-e927-4707-9aa7-f03ff929efe6 container secret-volume-test: 
+STEP: delete the pod
+Dec 10 10:51:29.674: INFO: Waiting for pod pod-secrets-f3b54c07-e927-4707-9aa7-f03ff929efe6 to disappear
+Dec 10 10:51:29.677: INFO: Pod pod-secrets-f3b54c07-e927-4707-9aa7-f03ff929efe6 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:51:29.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-7307" for this suite.
+Dec 10 10:51:35.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:51:35.771: INFO: namespace secrets-7307 deletion completed in 6.090568688s
+
+• [SLOW TEST:8.277 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
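+
+"Item Mode set" refers to the per-item mode field on a secret volume: the key is remapped to a new path and the resulting file carries the given permissions. A sketch (names and mode illustrative):
+```
+kubectl create secret generic secret-demo --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-mode-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: secret-volume-test
+    image: busybox
+    command: ["sh", "-c", "ls -l /etc/secret-volume/new-path-data-1 && cat /etc/secret-volume/new-path-data-1"]
+    volumeMounts:
+    - name: secret-volume
+      mountPath: /etc/secret-volume
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: secret-demo
+      items:
+      - key: data-1
+        path: new-path-data-1
+        mode: 0400           # expect -r-------- on the mounted file
+EOF
+```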
+SSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates that NodeSelector is respected if not matching  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:51:35.771: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename sched-pred
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-9640
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
+Dec 10 10:51:35.910: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Dec 10 10:51:35.917: INFO: Waiting for terminating namespaces to be deleted...
+Dec 10 10:51:35.919: INFO: 
+Logging pods the kubelet thinks are on node dce81 before test
+Dec 10 10:51:35.929: INFO: calico-kube-controllers-6b7d5ffdd4-x65qw from kube-system started at 2019-12-08 10:38:02 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.929: INFO: 	Container calico-kube-controllers ready: true, restart count 2
+Dec 10 10:51:35.929: INFO: node-local-dns-cv2r5 from kube-system started at 2019-12-10 09:33:54 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.929: INFO: 	Container node-cache ready: true, restart count 0
+Dec 10 10:51:35.929: INFO: calico-node-zj8bt from kube-system started at 2019-12-08 10:37:37 +0000 UTC (2 container statuses recorded)
+Dec 10 10:51:35.929: INFO: 	Container calico-node ready: true, restart count 2
+Dec 10 10:51:35.929: INFO: 	Container install-cni ready: true, restart count 2
+Dec 10 10:51:35.929: INFO: kube-proxy-lc4c7 from kube-system started at 2019-12-08 10:37:38 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.929: INFO: 	Container kube-proxy ready: true, restart count 2
+Dec 10 10:51:35.929: INFO: dce-prometheus-698b884db7-5vrk2 from kube-system started at 2019-12-09 03:02:00 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.929: INFO: 	Container dce-prometheus ready: true, restart count 0
+Dec 10 10:51:35.929: INFO: smokeping-drpdh from kube-system started at 2019-12-08 10:38:00 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.929: INFO: 	Container smokeping ready: true, restart count 1
+Dec 10 10:51:35.929: INFO: dce-cloud-provider-manager-rtcrj from kube-system started at 2019-12-08 10:37:37 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.929: INFO: 	Container dce-cloud-provider ready: true, restart count 2
+Dec 10 10:51:35.929: INFO: dce-chart-manager-797958bcff-v2wfh from kube-system started at 2019-12-08 10:38:00 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.929: INFO: 	Container chart-manager ready: true, restart count 1
+Dec 10 10:51:35.929: INFO: sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-dhl7h from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 10:51:35.929: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Dec 10 10:51:35.929: INFO: 	Container systemd-logs ready: true, restart count 0
+Dec 10 10:51:35.929: INFO: 
+Logging pods the kubelet thinks are on node dce82 before test
+Dec 10 10:51:35.939: INFO: sonobuoy-e2e-job-3fef55150259473e from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 10:51:35.939: INFO: 	Container e2e ready: true, restart count 0
+Dec 10 10:51:35.939: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Dec 10 10:51:35.939: INFO: node-local-dns-jwvds from kube-system started at 2019-12-10 09:33:54 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.939: INFO: 	Container node-cache ready: true, restart count 0
+Dec 10 10:51:35.939: INFO: calico-node-6bfc2 from kube-system started at 2019-12-09 02:46:32 +0000 UTC (2 container statuses recorded)
+Dec 10 10:51:35.939: INFO: 	Container calico-node ready: true, restart count 1
+Dec 10 10:51:35.939: INFO: 	Container install-cni ready: true, restart count 1
+Dec 10 10:51:35.939: INFO: kube-proxy-gdkmh from kube-system started at 2019-12-09 02:46:32 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.939: INFO: 	Container kube-proxy ready: true, restart count 2
+Dec 10 10:51:35.939: INFO: sonobuoy from sonobuoy started at 2019-12-10 09:57:13 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.939: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Dec 10 10:51:35.939: INFO: coredns-56b78b5b9c-vvgnk from kube-system started at 2019-12-10 09:38:10 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.939: INFO: 	Container coredns ready: true, restart count 0
+Dec 10 10:51:35.939: INFO: smokeping-jw5wv from kube-system started at 2019-12-09 02:46:32 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.939: INFO: 	Container smokeping ready: true, restart count 2
+Dec 10 10:51:35.939: INFO: sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-vczr4 from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 10:51:35.939: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Dec 10 10:51:35.939: INFO: 	Container systemd-logs ready: true, restart count 0
+Dec 10 10:51:35.939: INFO: dce-system-dnsservice-868586b8dd-glqkf from dce-system started at 2019-12-10 09:28:56 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.939: INFO: 	Container dce-system-dnsservice ready: true, restart count 0
+Dec 10 10:51:35.939: INFO: 
+Logging pods the kubelet thinks are on node dce83 before test
+Dec 10 10:51:35.947: INFO: calico-node-856tw from kube-system started at 2019-12-09 02:46:26 +0000 UTC (2 container statuses recorded)
+Dec 10 10:51:35.947: INFO: 	Container calico-node ready: true, restart count 3
+Dec 10 10:51:35.947: INFO: 	Container install-cni ready: true, restart count 3
+Dec 10 10:51:35.947: INFO: coredns-56b78b5b9c-629w2 from kube-system started at 2019-12-10 09:38:10 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.947: INFO: 	Container coredns ready: true, restart count 0
+Dec 10 10:51:35.947: INFO: sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-9bdkz from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 10:51:35.947: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Dec 10 10:51:35.947: INFO: 	Container systemd-logs ready: true, restart count 0
+Dec 10 10:51:35.947: INFO: smokeping-xkkch from kube-system started at 2019-12-09 02:46:26 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.947: INFO: 	Container smokeping ready: true, restart count 5
+Dec 10 10:51:35.947: INFO: kube-proxy-g25r8 from kube-system started at 2019-12-09 02:46:26 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.947: INFO: 	Container kube-proxy ready: true, restart count 5
+Dec 10 10:51:35.947: INFO: coredns-coredns-7d54967c97-22wrr from kube-system started at 2019-12-09 02:58:50 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.947: INFO: 	Container coredns ready: true, restart count 5
+Dec 10 10:51:35.947: INFO: node-local-dns-mqqrp from kube-system started at 2019-12-10 09:33:54 +0000 UTC (1 container statuses recorded)
+Dec 10 10:51:35.947: INFO: 	Container node-cache ready: true, restart count 0
+[It] validates that NodeSelector is respected if not matching  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Trying to schedule Pod with nonempty NodeSelector.
+STEP: Considering event: 
+Type = [Warning], Name = [restricted-pod.15defcfff983f66b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:51:36.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-9640" for this suite.
+Dec 10 10:51:42.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:51:43.055: INFO: namespace sched-pred-9640 deletion completed in 6.080610536s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
+
+• [SLOW TEST:7.284 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
+  validates that NodeSelector is respected if not matching  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
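+
+The FailedScheduling event above is the expected outcome for any pod whose nodeSelector matches no node label. A minimal sketch of such a pod (the pod name, label, and image are illustrative, not taken from this run):
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: restricted-pod
+spec:
+  nodeSelector:
+    env: no-such-label        # no node carries this label
+  containers:
+  - name: pause
+    image: k8s.gcr.io/pause:3.1
+EOF
+# The pod stays Pending with the same "didn't match node selector" event:
+kubectl describe pod restricted-pod
+```
+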
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:51:43.055: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename secrets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-816
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating secret with name secret-test-0fc15111-d956-42f5-a627-459397436217
+STEP: Creating a pod to test consume secrets
+Dec 10 10:51:43.212: INFO: Waiting up to 5m0s for pod "pod-secrets-e8dce612-c518-487e-a72e-8a1c8c23c6a2" in namespace "secrets-816" to be "success or failure"
+Dec 10 10:51:43.216: INFO: Pod "pod-secrets-e8dce612-c518-487e-a72e-8a1c8c23c6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004909ms
+Dec 10 10:51:45.220: INFO: Pod "pod-secrets-e8dce612-c518-487e-a72e-8a1c8c23c6a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008628986s
+STEP: Saw pod success
+Dec 10 10:51:45.220: INFO: Pod "pod-secrets-e8dce612-c518-487e-a72e-8a1c8c23c6a2" satisfied condition "success or failure"
+Dec 10 10:51:45.223: INFO: Trying to get logs from node dce82 pod pod-secrets-e8dce612-c518-487e-a72e-8a1c8c23c6a2 container secret-volume-test: 
+STEP: delete the pod
+Dec 10 10:51:45.245: INFO: Waiting for pod pod-secrets-e8dce612-c518-487e-a72e-8a1c8c23c6a2 to disappear
+Dec 10 10:51:45.247: INFO: Pod pod-secrets-e8dce612-c518-487e-a72e-8a1c8c23c6a2 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:51:45.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-816" for this suite.
+Dec 10 10:51:51.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:51:51.330: INFO: namespace secrets-816 deletion completed in 6.077641566s
+
+• [SLOW TEST:8.275 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
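+
+A hedged sketch of the kind of spec this test drives: a secret mounted as a volume with defaultMode, verified by listing the resulting file permissions (secret name, pod name, and mode are illustrative):
+```
+kubectl create secret generic test-secret --from-literal=data-1=value-1
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-defaultmode-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: secret-volume-test
+    image: busybox
+    command: ["ls", "-l", "/etc/secret-volume"]
+    volumeMounts:
+    - name: secret-volume
+      mountPath: /etc/secret-volume
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: test-secret
+      defaultMode: 0400       # files should show up as -r--------
+EOF
+```
+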
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
+  should be submitted and removed  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:51:51.331: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename pods
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-2687
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods Set QOS Class
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
+[It] should be submitted and removed  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying QOS class is set on the pod
+[AfterEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:51:51.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-2687" for this suite.
+Dec 10 10:52:13.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:52:13.579: INFO: namespace pods-2687 deletion completed in 22.091984432s
+
+• [SLOW TEST:22.248 seconds]
+[k8s.io] [sig-node] Pods Extended
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  [k8s.io] Pods Set QOS Class
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should be submitted and removed  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
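+
+The QOS class verified above is derived from the pod's resource requests and limits: when every container's requests equal its limits, the pod is classed Guaranteed. A minimal sketch (names and sizes are illustrative):
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: qos-demo
+spec:
+  containers:
+  - name: web
+    image: nginx:1.15-alpine
+    resources:
+      requests: {cpu: 100m, memory: 100Mi}
+      limits:   {cpu: 100m, memory: 100Mi}   # requests == limits -> Guaranteed
+EOF
+kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'
+```
+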
+S
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute prestop exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:52:13.579: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-1870
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: create the pod with lifecycle hook
+STEP: delete the pod with lifecycle hook
+Dec 10 10:52:21.777: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:21.779: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 10 10:52:23.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:23.783: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 10 10:52:25.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:25.782: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 10 10:52:27.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:27.783: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 10 10:52:29.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:29.783: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 10 10:52:31.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:31.782: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 10 10:52:33.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:33.783: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 10 10:52:35.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:35.783: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 10 10:52:37.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:37.783: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 10 10:52:39.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:39.783: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 10 10:52:41.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:41.783: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 10 10:52:43.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 10 10:52:43.783: INFO: Pod pod-with-prestop-exec-hook no longer exists
+STEP: check prestop hook
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:52:43.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-lifecycle-hook-1870" for this suite.
+Dec 10 10:53:05.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:53:05.876: INFO: namespace container-lifecycle-hook-1870 deletion completed in 22.083389143s
+
+• [SLOW TEST:52.297 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
+    should execute prestop exec hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
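+
+The long series of "still exists" polls above is expected: deleting a pod with a preStop exec hook is delayed until the hook finishes (bounded by the grace period). A minimal sketch, with an artificial sleep standing in for the test's hook command:
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: prestop-demo
+spec:
+  terminationGracePeriodSeconds: 30
+  containers:
+  - name: main
+    image: nginx:1.15-alpine
+    lifecycle:
+      preStop:
+        exec:
+          command: ["/bin/sh", "-c", "sleep 15"]   # runs before SIGTERM reaches nginx
+EOF
+kubectl delete pod prestop-demo   # blocks roughly 15s while the hook runs
+```
+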
+SSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute poststart exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:53:05.877: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-4932
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: create the pod with lifecycle hook
+STEP: check poststart hook
+STEP: delete the pod with lifecycle hook
+Dec 10 10:53:10.101: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:10.104: INFO: Pod pod-with-poststart-exec-hook still exists
+Dec 10 10:53:12.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:12.108: INFO: Pod pod-with-poststart-exec-hook still exists
+Dec 10 10:53:14.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:14.107: INFO: Pod pod-with-poststart-exec-hook still exists
+Dec 10 10:53:16.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:16.108: INFO: Pod pod-with-poststart-exec-hook still exists
+Dec 10 10:53:18.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:18.154: INFO: Pod pod-with-poststart-exec-hook still exists
+Dec 10 10:53:20.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:20.109: INFO: Pod pod-with-poststart-exec-hook still exists
+Dec 10 10:53:22.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:22.107: INFO: Pod pod-with-poststart-exec-hook still exists
+Dec 10 10:53:24.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:24.107: INFO: Pod pod-with-poststart-exec-hook still exists
+Dec 10 10:53:26.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:26.107: INFO: Pod pod-with-poststart-exec-hook still exists
+Dec 10 10:53:28.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:28.107: INFO: Pod pod-with-poststart-exec-hook still exists
+Dec 10 10:53:30.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:30.107: INFO: Pod pod-with-poststart-exec-hook still exists
+Dec 10 10:53:32.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Dec 10 10:53:32.107: INFO: Pod pod-with-poststart-exec-hook no longer exists
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:53:32.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-lifecycle-hook-4932" for this suite.
+Dec 10 10:53:54.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:53:54.200: INFO: namespace container-lifecycle-hook-4932 deletion completed in 22.088400233s
+
+• [SLOW TEST:48.323 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
+    should execute poststart exec hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
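+
+The postStart counterpart runs immediately after the container is created, concurrently with its entrypoint; the test above checks the hook's side effect before deleting the pod. A minimal sketch (the hook command is illustrative):
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: poststart-demo
+spec:
+  containers:
+  - name: main
+    image: nginx:1.15-alpine
+    lifecycle:
+      postStart:
+        exec:
+          command: ["/bin/sh", "-c", "echo started > /usr/share/nginx/html/hook.txt"]
+EOF
+```
+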
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:53:54.200: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5161
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir 0777 on tmpfs
+Dec 10 10:53:54.351: INFO: Waiting up to 5m0s for pod "pod-b408e888-c88d-4780-94b1-09e1b5c2d34c" in namespace "emptydir-5161" to be "success or failure"
+Dec 10 10:53:54.353: INFO: Pod "pod-b408e888-c88d-4780-94b1-09e1b5c2d34c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.60651ms
+Dec 10 10:53:56.357: INFO: Pod "pod-b408e888-c88d-4780-94b1-09e1b5c2d34c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005867555s
+Dec 10 10:53:58.361: INFO: Pod "pod-b408e888-c88d-4780-94b1-09e1b5c2d34c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009919305s
+STEP: Saw pod success
+Dec 10 10:53:58.361: INFO: Pod "pod-b408e888-c88d-4780-94b1-09e1b5c2d34c" satisfied condition "success or failure"
+Dec 10 10:53:58.365: INFO: Trying to get logs from node dce82 pod pod-b408e888-c88d-4780-94b1-09e1b5c2d34c container test-container: 
+STEP: delete the pod
+Dec 10 10:53:58.385: INFO: Waiting for pod pod-b408e888-c88d-4780-94b1-09e1b5c2d34c to disappear
+Dec 10 10:53:58.388: INFO: Pod pod-b408e888-c88d-4780-94b1-09e1b5c2d34c no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:53:58.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-5161" for this suite.
+Dec 10 10:54:04.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:54:04.484: INFO: namespace emptydir-5161 deletion completed in 6.092951941s
+
+• [SLOW TEST:10.284 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
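+
+The (non-root,0777,tmpfs) case above corresponds to an emptyDir with medium Memory mounted into a pod running as a non-root user. A hedged sketch (UID, paths, and image are illustrative):
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-tmpfs-demo
+spec:
+  securityContext:
+    runAsUser: 1001           # non-root
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["sh", "-c", "ls -ld /mnt/ed && touch /mnt/ed/ok"]
+    volumeMounts:
+    - name: ed
+      mountPath: /mnt/ed
+  volumes:
+  - name: ed
+    emptyDir:
+      medium: Memory          # tmpfs; omit "medium" for node-default storage
+EOF
+```
+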
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with secret pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:54:04.485: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename subpath
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-485
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
+STEP: Setting up data
+[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating pod pod-subpath-test-secret-xjc2
+STEP: Creating a pod to test atomic-volume-subpath
+Dec 10 10:54:04.635: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xjc2" in namespace "subpath-485" to be "success or failure"
+Dec 10 10:54:04.638: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.011953ms
+Dec 10 10:54:06.642: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006796666s
+Dec 10 10:54:08.646: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Running", Reason="", readiness=true. Elapsed: 4.010961784s
+Dec 10 10:54:10.650: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Running", Reason="", readiness=true. Elapsed: 6.015138901s
+Dec 10 10:54:12.654: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Running", Reason="", readiness=true. Elapsed: 8.018810628s
+Dec 10 10:54:14.658: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Running", Reason="", readiness=true. Elapsed: 10.022959664s
+Dec 10 10:54:16.662: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Running", Reason="", readiness=true. Elapsed: 12.027018451s
+Dec 10 10:54:18.666: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Running", Reason="", readiness=true. Elapsed: 14.030875431s
+Dec 10 10:54:20.670: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Running", Reason="", readiness=true. Elapsed: 16.035302783s
+Dec 10 10:54:22.675: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Running", Reason="", readiness=true. Elapsed: 18.039525853s
+Dec 10 10:54:24.679: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Running", Reason="", readiness=true. Elapsed: 20.044047711s
+Dec 10 10:54:26.683: INFO: Pod "pod-subpath-test-secret-xjc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.048106387s
+STEP: Saw pod success
+Dec 10 10:54:26.683: INFO: Pod "pod-subpath-test-secret-xjc2" satisfied condition "success or failure"
+Dec 10 10:54:26.686: INFO: Trying to get logs from node dce82 pod pod-subpath-test-secret-xjc2 container test-container-subpath-secret-xjc2: 
+STEP: delete the pod
+Dec 10 10:54:26.698: INFO: Waiting for pod pod-subpath-test-secret-xjc2 to disappear
+Dec 10 10:54:26.700: INFO: Pod pod-subpath-test-secret-xjc2 no longer exists
+STEP: Deleting pod pod-subpath-test-secret-xjc2
+Dec 10 10:54:26.700: INFO: Deleting pod "pod-subpath-test-secret-xjc2" in namespace "subpath-485"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:54:26.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-485" for this suite.
+Dec 10 10:54:32.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:54:32.790: INFO: namespace subpath-485 deletion completed in 6.085300954s
+
+• [SLOW TEST:28.305 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  Atomic writer volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
+    should support subpaths with secret pod [LinuxOnly] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
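+
+The subpath test mounts a single key of a secret volume at a file path via volumeMounts.subPath. A minimal sketch (assumes a secret named test-secret with key data-1, as in the earlier example):
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: subpath-secret-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test
+    image: busybox
+    command: ["cat", "/mnt/data-1"]
+    volumeMounts:
+    - name: sec
+      mountPath: /mnt/data-1
+      subPath: data-1         # mount only this key of the volume
+  volumes:
+  - name: sec
+    secret:
+      secretName: test-secret
+EOF
+```
+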
+SSSSSSSSSS
+------------------------------
+[k8s.io] Container Runtime blackbox test on terminated container 
+  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:54:32.790: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-runtime
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-8470
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: create the container
+STEP: wait for the container to reach Succeeded
+STEP: get the container status
+STEP: the container should be terminated
+STEP: the termination message should be set
+Dec 10 10:54:34.949: INFO: Expected: &{} to match Container's Termination Message:  --
+STEP: delete the container
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:54:34.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-8470" for this suite.
+Dec 10 10:54:40.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:54:41.042: INFO: namespace container-runtime-8470 deletion completed in 6.078281679s
+
+• [SLOW TEST:8.252 seconds]
+[k8s.io] Container Runtime
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  blackbox test
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
+    on terminated container
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
+      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+      /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
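+
+The policy under test, FallbackToLogsOnError, substitutes container logs for the termination message only when the container fails; a succeeding container that writes nothing yields the empty message seen above. A minimal sketch:
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: termmsg-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: main
+    image: busybox
+    command: ["sh", "-c", "exit 0"]                  # succeeds, writes no message
+    terminationMessagePolicy: FallbackToLogsOnError
+EOF
+kubectl get pod termmsg-demo \
+  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # empty
+```
+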
+SSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  should perform rolling updates and roll backs of template modifications [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:54:41.042: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename statefulset
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7247
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
+STEP: Creating service test in namespace statefulset-7247
+[It] should perform rolling updates and roll backs of template modifications [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a new StatefulSet
+Dec 10 10:54:41.193: INFO: Found 0 stateful pods, waiting for 3
+Dec 10 10:54:51.197: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Dec 10 10:54:51.197: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Dec 10 10:54:51.197: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+Dec 10 10:54:51.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-7247 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Dec 10 10:54:51.502: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Dec 10 10:54:51.502: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Dec 10 10:54:51.502: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
+Dec 10 10:55:01.531: INFO: Updating stateful set ss2
+STEP: Creating a new revision
+STEP: Updating Pods in reverse ordinal order
+Dec 10 10:55:11.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-7247 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Dec 10 10:55:11.776: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
+Dec 10 10:55:11.776: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Dec 10 10:55:11.776: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Dec 10 10:55:21.795: INFO: Waiting for StatefulSet statefulset-7247/ss2 to complete update
+Dec 10 10:55:21.795: INFO: Waiting for Pod statefulset-7247/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
+Dec 10 10:55:21.795: INFO: Waiting for Pod statefulset-7247/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
+Dec 10 10:55:31.799: INFO: Waiting for StatefulSet statefulset-7247/ss2 to complete update
+Dec 10 10:55:31.799: INFO: Waiting for Pod statefulset-7247/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
+Dec 10 10:55:31.799: INFO: Waiting for Pod statefulset-7247/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
+Dec 10 10:55:41.800: INFO: Waiting for StatefulSet statefulset-7247/ss2 to complete update
+Dec 10 10:55:41.800: INFO: Waiting for Pod statefulset-7247/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
+Dec 10 10:55:51.801: INFO: Waiting for StatefulSet statefulset-7247/ss2 to complete update
+Dec 10 10:55:51.801: INFO: Waiting for Pod statefulset-7247/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
+STEP: Rolling back to a previous revision
+Dec 10 10:56:01.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-7247 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Dec 10 10:56:02.022: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Dec 10 10:56:02.022: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Dec 10 10:56:02.022: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Dec 10 10:56:12.056: INFO: Updating stateful set ss2
+STEP: Rolling back update in reverse ordinal order
+Dec 10 10:56:22.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-7247 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Dec 10 10:56:22.276: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
+Dec 10 10:56:22.276: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Dec 10 10:56:22.276: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Dec 10 10:56:32.293: INFO: Waiting for StatefulSet statefulset-7247/ss2 to complete update
+Dec 10 10:56:32.293: INFO: Waiting for Pod statefulset-7247/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
+Dec 10 10:56:32.294: INFO: Waiting for Pod statefulset-7247/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
+Dec 10 10:56:42.298: INFO: Waiting for StatefulSet statefulset-7247/ss2 to complete update
+Dec 10 10:56:42.298: INFO: Waiting for Pod statefulset-7247/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
+Dec 10 10:56:42.298: INFO: Waiting for Pod statefulset-7247/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
+Dec 10 10:56:52.299: INFO: Waiting for StatefulSet statefulset-7247/ss2 to complete update
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
+Dec 10 10:57:02.300: INFO: Deleting all statefulset in ns statefulset-7247
+Dec 10 10:57:02.302: INFO: Scaling statefulset ss2 to 0
+Dec 10 10:57:12.315: INFO: Waiting for statefulset status.replicas updated to 0
+Dec 10 10:57:12.317: INFO: Deleting statefulset ss2
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:57:12.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-7247" for this suite.
+Dec 10 10:57:18.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:57:18.415: INFO: namespace statefulset-7247 deletion completed in 6.083878874s
+
+• [SLOW TEST:157.373 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should perform rolling updates and roll backs of template modifications [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
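+
+The image update and rollback driven programmatically above can also be done with kubectl; a hedged sketch against the StatefulSet from this run (the container name "nginx" is an assumption about the test's pod template):
+```
+kubectl -n statefulset-7247 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
+kubectl -n statefulset-7247 rollout status statefulset/ss2
+# Roll back by restoring the previous image:
+kubectl -n statefulset-7247 set image statefulset/ss2 nginx=docker.io/library/nginx:1.14-alpine
+kubectl -n statefulset-7247 rollout status statefulset/ss2
+```
+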
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  Should recreate evicted statefulset [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:57:18.416: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename statefulset
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-1974
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
+STEP: Creating service test in namespace statefulset-1974
+[It] Should recreate evicted statefulset [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Looking for a node to schedule stateful set and pod
+STEP: Creating pod with conflicting port in namespace statefulset-1974
+STEP: Creating statefulset with conflicting port in namespace statefulset-1974
+STEP: Waiting until pod test-pod starts running in namespace statefulset-1974
+STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1974
+Dec 10 10:57:22.590: INFO: Observed stateful pod in namespace: statefulset-1974, name: ss-0, uid: 99b2a448-bf72-47a5-b28e-e3792e6ce6d6, status phase: Pending. Waiting for statefulset controller to delete.
+Dec 10 10:57:22.985: INFO: Observed stateful pod in namespace: statefulset-1974, name: ss-0, uid: 99b2a448-bf72-47a5-b28e-e3792e6ce6d6, status phase: Failed. Waiting for statefulset controller to delete.
+Dec 10 10:57:22.990: INFO: Observed stateful pod in namespace: statefulset-1974, name: ss-0, uid: 99b2a448-bf72-47a5-b28e-e3792e6ce6d6, status phase: Failed. Waiting for statefulset controller to delete.
+Dec 10 10:57:22.993: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1974
+STEP: Removing pod with conflicting port in namespace statefulset-1974
+STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1974 and enters the Running state
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
+Dec 10 10:57:27.011: INFO: Deleting all statefulset in ns statefulset-1974
+Dec 10 10:57:27.014: INFO: Scaling statefulset ss to 0
+Dec 10 10:57:47.030: INFO: Waiting for statefulset status.replicas updated to 0
+Dec 10 10:57:47.034: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:57:47.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-1974" for this suite.
+Dec 10 10:57:53.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:57:53.134: INFO: namespace statefulset-1974 deletion completed in 6.085741007s
+
+• [SLOW TEST:34.718 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    Should recreate evicted statefulset [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
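+
+The conflict engineered above is a hostPort collision: a bare pod holds a host port on a node, and the StatefulSet's pod, pinned to the same node and port, keeps failing and being recreated by the controller. A sketch of the conflicting bare pod (node name and port are illustrative):
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-pod
+spec:
+  nodeName: dce82             # pin to one node so the ports actually collide
+  containers:
+  - name: nginx
+    image: nginx:1.14-alpine
+    ports:
+    - containerPort: 80
+      hostPort: 21017         # the StatefulSet template requests the same hostPort
+EOF
+```
+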
+SSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:57:53.134: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename secrets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9586
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating secret with name secret-test-edb5ec34-b65c-43f8-a4a1-ec6c27201048
+STEP: Creating a pod to test consume secrets
+Dec 10 10:57:53.288: INFO: Waiting up to 5m0s for pod "pod-secrets-8095e1ec-90ea-4be1-9130-81941ee46eca" in namespace "secrets-9586" to be "success or failure"
+Dec 10 10:57:53.291: INFO: Pod "pod-secrets-8095e1ec-90ea-4be1-9130-81941ee46eca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.012325ms
+Dec 10 10:57:55.294: INFO: Pod "pod-secrets-8095e1ec-90ea-4be1-9130-81941ee46eca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006453975s
+STEP: Saw pod success
+Dec 10 10:57:55.294: INFO: Pod "pod-secrets-8095e1ec-90ea-4be1-9130-81941ee46eca" satisfied condition "success or failure"
+Dec 10 10:57:55.296: INFO: Trying to get logs from node dce82 pod pod-secrets-8095e1ec-90ea-4be1-9130-81941ee46eca container secret-volume-test: 
+STEP: delete the pod
+Dec 10 10:57:55.362: INFO: Waiting for pod pod-secrets-8095e1ec-90ea-4be1-9130-81941ee46eca to disappear
+Dec 10 10:57:55.366: INFO: Pod pod-secrets-8095e1ec-90ea-4be1-9130-81941ee46eca no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:57:55.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-9586" for this suite.
+Dec 10 10:58:01.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:58:01.459: INFO: namespace secrets-9586 deletion completed in 6.087911446s
+
+• [SLOW TEST:8.324 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
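+
+This variant adds a pod-level fsGroup so the mounted secret files are group-owned by a non-root GID alongside the defaultMode bits. A hedged sketch (UID/GID and mode are illustrative):
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-fsgroup-demo
+spec:
+  securityContext:
+    runAsUser: 1000
+    fsGroup: 1000             # volume files are group-owned by this GID
+  restartPolicy: Never
+  containers:
+  - name: secret-volume-test
+    image: busybox
+    command: ["ls", "-l", "/etc/secret-volume"]
+    volumeMounts:
+    - name: sec
+      mountPath: /etc/secret-volume
+  volumes:
+  - name: sec
+    secret:
+      secretName: test-secret
+      defaultMode: 0440
+EOF
+```
+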
+SSS
+------------------------------
+[k8s.io] Probing container 
+  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:58:01.459: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-probe
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-37
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
+[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating pod liveness-e69ecc38-31f9-497d-be07-f672f1fb7abd in namespace container-probe-37
+Dec 10 10:58:05.605: INFO: Started pod liveness-e69ecc38-31f9-497d-be07-f672f1fb7abd in namespace container-probe-37
+STEP: checking the pod's current state and verifying that restartCount is present
+Dec 10 10:58:05.608: INFO: Initial restart count of pod liveness-e69ecc38-31f9-497d-be07-f672f1fb7abd is 0
+Dec 10 10:58:23.647: INFO: Restart count of pod container-probe-37/liveness-e69ecc38-31f9-497d-be07-f672f1fb7abd is now 1 (18.038616539s elapsed)
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:58:23.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-37" for this suite.
+Dec 10 10:58:29.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:58:29.739: INFO: namespace container-probe-37 deletion completed in 6.080469599s
+
+• [SLOW TEST:28.280 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
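+
+The restart observed above is driven by an HTTP liveness probe against /healthz on a server that deliberately starts failing; the kubelet kills and restarts the container once the probe fails. A hedged sketch using the stock liveness image from the Kubernetes docs (image and timings are assumptions, not taken from this run):
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-demo
+spec:
+  containers:
+  - name: liveness
+    image: k8s.gcr.io/liveness   # serves /healthz, then returns 500 after ~10s
+    args: ["/server"]
+    livenessProbe:
+      httpGet:
+        path: /healthz
+        port: 8080
+      initialDelaySeconds: 3
+      periodSeconds: 3
+EOF
+kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
+```
+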
+SSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl version 
+  should check is all data is printed  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:58:29.740: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8290
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[It] should check is all data is printed  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 10:58:29.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 version'
+Dec 10 10:58:29.965: INFO: stderr: ""
+Dec 10 10:58:29.965: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.3\", GitCommit:\"2d3c76f9091b6bec110a5e63777c332469e0cba2\", GitTreeState:\"clean\", BuildDate:\"2019-08-19T11:13:54Z\", GoVersion:\"go1.12.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"v\", Minor:\".1\", GitVersion:\"v1.15.3\", GitCommit:\"93da878\", GitTreeState:\"clean\", BuildDate:\"2019-11-05T08:48:15Z\", GoVersion:\"go1.12.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:58:29.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-8290" for this suite.
+Dec 10 10:58:35.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:58:36.049: INFO: namespace kubectl-8290 deletion completed in 6.079251033s
+
+• [SLOW TEST:6.310 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl version
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should check is all data is printed  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:58:36.049: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-136
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 10:58:36.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4722eb1e-d8a3-4e1e-9a09-4ed4c93ee1cc" in namespace "projected-136" to be "success or failure"
+Dec 10 10:58:36.208: INFO: Pod "downwardapi-volume-4722eb1e-d8a3-4e1e-9a09-4ed4c93ee1cc": Phase="Pending", Reason="", readiness=false. Elapsed: 1.872039ms
+Dec 10 10:58:38.212: INFO: Pod "downwardapi-volume-4722eb1e-d8a3-4e1e-9a09-4ed4c93ee1cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005480082s
+Dec 10 10:58:40.216: INFO: Pod "downwardapi-volume-4722eb1e-d8a3-4e1e-9a09-4ed4c93ee1cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010063658s
+STEP: Saw pod success
+Dec 10 10:58:40.216: INFO: Pod "downwardapi-volume-4722eb1e-d8a3-4e1e-9a09-4ed4c93ee1cc" satisfied condition "success or failure"
+Dec 10 10:58:40.220: INFO: Trying to get logs from node dce82 pod downwardapi-volume-4722eb1e-d8a3-4e1e-9a09-4ed4c93ee1cc container client-container: 
+STEP: delete the pod
+Dec 10 10:58:40.239: INFO: Waiting for pod downwardapi-volume-4722eb1e-d8a3-4e1e-9a09-4ed4c93ee1cc to disappear
+Dec 10 10:58:40.242: INFO: Pod downwardapi-volume-4722eb1e-d8a3-4e1e-9a09-4ed4c93ee1cc no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:58:40.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-136" for this suite.
+Dec 10 10:58:46.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:58:46.333: INFO: namespace projected-136 deletion completed in 6.086628276s
+
+• [SLOW TEST:10.283 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
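+
+When a container sets no memory limit, the downward API reports the node's allocatable memory instead, which is what the test verifies. A hedged sketch using a plain downwardAPI volume (the test uses the equivalent projected form; names are illustrative):
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["cat", "/etc/podinfo/memory_limit"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: memory_limit
+        resourceFieldRef:
+          containerName: client-container
+          resource: limits.memory   # no limit set -> node allocatable is reported
+EOF
+```
+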
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:58:46.333: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6522
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir 0777 on node default medium
+Dec 10 10:58:46.486: INFO: Waiting up to 5m0s for pod "pod-1dfb6912-e6de-439c-9f3a-870dfbc3fee9" in namespace "emptydir-6522" to be "success or failure"
+Dec 10 10:58:46.488: INFO: Pod "pod-1dfb6912-e6de-439c-9f3a-870dfbc3fee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227806ms
+Dec 10 10:58:48.492: INFO: Pod "pod-1dfb6912-e6de-439c-9f3a-870dfbc3fee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005910435s
+Dec 10 10:58:50.495: INFO: Pod "pod-1dfb6912-e6de-439c-9f3a-870dfbc3fee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009160073s
+STEP: Saw pod success
+Dec 10 10:58:50.495: INFO: Pod "pod-1dfb6912-e6de-439c-9f3a-870dfbc3fee9" satisfied condition "success or failure"
+Dec 10 10:58:50.497: INFO: Trying to get logs from node dce82 pod pod-1dfb6912-e6de-439c-9f3a-870dfbc3fee9 container test-container: 
+STEP: delete the pod
+Dec 10 10:58:50.509: INFO: Waiting for pod pod-1dfb6912-e6de-439c-9f3a-870dfbc3fee9 to disappear
+Dec 10 10:58:50.511: INFO: Pod pod-1dfb6912-e6de-439c-9f3a-870dfbc3fee9 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:58:50.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-6522" for this suite.
+Dec 10 10:58:56.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:58:56.610: INFO: namespace emptydir-6522 deletion completed in 6.095798531s
+
+• [SLOW TEST:10.277 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl describe 
+  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:58:56.611: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6144
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 10:58:56.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-6144'
+Dec 10 10:58:56.913: INFO: stderr: ""
+Dec 10 10:58:56.913: INFO: stdout: "replicationcontroller/redis-master created\n"
+Dec 10 10:58:56.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-6144'
+Dec 10 10:58:57.091: INFO: stderr: ""
+Dec 10 10:58:57.091: INFO: stdout: "service/redis-master created\n"
+STEP: Waiting for Redis master to start.
+Dec 10 10:58:58.094: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 10:58:58.094: INFO: Found 0 / 1
+Dec 10 10:58:59.095: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 10:58:59.095: INFO: Found 1 / 1
+Dec 10 10:58:59.095: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+Dec 10 10:58:59.097: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 10:58:59.097: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+Dec 10 10:58:59.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 describe pod redis-master-sxtq8 --namespace=kubectl-6144'
+Dec 10 10:58:59.190: INFO: stderr: ""
+Dec 10 10:58:59.190: INFO: stdout: "Name:           redis-master-sxtq8\nNamespace:      kubectl-6144\nNode:           dce82/10.6.135.82\nStart Time:     Tue, 10 Dec 2019 10:58:56 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    kubernetes.io/psp: dce-psp-allow-all\nStatus:         Running\nIP:             172.28.8.125\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://34cf5935df890c0e971ecfe66a9a34ef1819198d1dce6accb798ee67d3974525\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 10 Dec 2019 10:58:58 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tq6rt (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-tq6rt:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-tq6rt\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  3s    default-scheduler  Successfully assigned kubectl-6144/redis-master-sxtq8 to dce82\n  Normal  Pulled     2s    kubelet, dce82     Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, dce82     Created container redis-master\n  Normal  Started    1s    kubelet, dce82     Started container redis-master\n"
+Dec 10 10:58:59.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 describe rc redis-master --namespace=kubectl-6144'
+Dec 10 10:58:59.302: INFO: stderr: ""
+Dec 10 10:58:59.302: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-6144\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: redis-master-sxtq8\n"
+Dec 10 10:58:59.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 describe service redis-master --namespace=kubectl-6144'
+Dec 10 10:58:59.400: INFO: stderr: ""
+Dec 10 10:58:59.400: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-6144\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.96.2.98\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         172.28.8.125:6379\nSession Affinity:  None\nEvents:            \n"
+Dec 10 10:58:59.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 describe node dce81'
+Dec 10 10:58:59.513: INFO: stderr: ""
+Dec 10 10:58:59.513: INFO: stdout: "Name:               dce81\nRoles:              master,registry\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=dce81\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\n                    node-role.kubernetes.io/registry=\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 08 Dec 2019 10:37:30 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                  Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                  ------  -----------------                 ------------------                ------                       -------\n  DCEEngineNotReady     False   Tue, 10 Dec 2019 10:58:12 +0000   Sun, 08 Dec 2019 10:46:23 +0000   DCEEngineReady               DCE engine is posting ready status.\n  TimeNotSynchronized   False   Tue, 10 Dec 2019 10:58:12 +0000   Sun, 08 Dec 2019 10:46:23 +0000   TimeSynchronized             The time of the node is synchronized\n  MemoryPressure        False   Tue, 10 Dec 2019 10:58:15 +0000   Sun, 08 Dec 2019 10:37:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure          False   Tue, 10 Dec 2019 10:58:15 +0000   Sun, 08 Dec 2019 10:37:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure           False   Tue, 10 Dec 2019 10:58:15 +0000   Sun, 08 Dec 2019 10:37:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                 True    Tue, 10 Dec 2019 10:58:15 +0000   Sun, 08 Dec 2019 10:38:00 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  10.6.135.81\n  Hostname:    dce81\nCapacity:\n cpu:                8\n ephemeral-storage:  51175Mi\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             16267512Ki\n pods:               110\nAllocatable:\n cpu:                5340m\n ephemeral-storage:  48294789041\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             10498296Ki\n pods:               110\nSystem Info:\n Machine ID:                 86ccb2d69faf44bb8677f5bb1b9272c5\n System UUID:                38B33442-3899-FF90-A141-1FE555A56FDF\n Boot ID:                    f7688116-d172-4fab-b0b7-54d55890695b\n Kernel Version:             3.10.0-693.el7.x86_64\n OS Image:                   CentOS Linux 7 (Core)\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.6.3\n Kubelet Version:            v1.15.3\n Kube-Proxy Version:         v1.15.3\nPodCIDR:                     172.28.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                       ------------  ----------  ---------------  -------------  ---\n  kube-system                calico-kube-controllers-6b7d5ffdd4-x65qw                   412m (7%)     412m (7%)   845Mi (8%)       845Mi (8%)     2d\n  kube-system                calico-node-zj8bt                                          250m (4%)     250m (4%)   500Mi 
(4%)       500Mi (4%)     2d\n  kube-system                dce-chart-manager-797958bcff-v2wfh                         1 (18%)       1 (18%)     1000Mi (9%)      1000Mi (9%)    2d\n  kube-system                dce-cloud-provider-manager-rtcrj                           100m (1%)     100m (1%)   100Mi (0%)       100Mi (0%)     2d\n  kube-system                dce-prometheus-698b884db7-5vrk2                            250m (4%)     500m (9%)   250Mi (2%)       500Mi (4%)     31h\n  kube-system                kube-proxy-lc4c7                                           250m (4%)     250m (4%)   500Mi (4%)       500Mi (4%)     2d\n  kube-system                node-local-dns-cv2r5                                       2 (37%)       2 (37%)     500M (4%)        500M (4%)      85m\n  kube-system                smokeping-drpdh                                            125m (2%)     125m (2%)   250Mi (2%)       250Mi (2%)     2d\n  sonobuoy                   sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-dhl7h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests          Limits\n  --------           --------          ------\n  cpu                4387m (82%)       4637m (86%)\n  memory             4112344320 (38%)  4374488320 (40%)\n  ephemeral-storage  0 (0%)            0 (0%)\nEvents:              \n"
+Dec 10 10:58:59.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 describe namespace kubectl-6144'
+Dec 10 10:58:59.603: INFO: stderr: ""
+Dec 10 10:58:59.603: INFO: stdout: "Name:         kubectl-6144\nLabels:       e2e-framework=kubectl\n              e2e-run=1e7faa35-c3f3-46a0-bfa5-98bef531e4ca\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nResource Limits\n Type       Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio\n ----       --------  ---  ---  ---------------  -------------  -----------------------\n Container  cpu       -    -    500m             500m           1\n Container  memory    -    -    1Gi              1Gi            1\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:58:59.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-6144" for this suite.
+Dec 10 10:59:21.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:59:21.696: INFO: namespace kubectl-6144 deletion completed in 22.087879643s
+
+• [SLOW TEST:25.085 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl describe
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:59:21.697: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename subpath
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-9196
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
+STEP: Setting up data
+[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating pod pod-subpath-test-configmap-lvv6
+STEP: Creating a pod to test atomic-volume-subpath
+Dec 10 10:59:21.898: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lvv6" in namespace "subpath-9196" to be "success or failure"
+Dec 10 10:59:21.900: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003966ms
+Dec 10 10:59:23.903: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Running", Reason="", readiness=true. Elapsed: 2.004480259s
+Dec 10 10:59:25.905: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Running", Reason="", readiness=true. Elapsed: 4.007168898s
+Dec 10 10:59:27.908: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Running", Reason="", readiness=true. Elapsed: 6.009889616s
+Dec 10 10:59:29.911: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Running", Reason="", readiness=true. Elapsed: 8.01297856s
+Dec 10 10:59:31.915: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Running", Reason="", readiness=true. Elapsed: 10.016628207s
+Dec 10 10:59:33.919: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Running", Reason="", readiness=true. Elapsed: 12.020639193s
+Dec 10 10:59:35.922: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Running", Reason="", readiness=true. Elapsed: 14.023708898s
+Dec 10 10:59:38.077: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Running", Reason="", readiness=true. Elapsed: 16.179219271s
+Dec 10 10:59:40.081: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Running", Reason="", readiness=true. Elapsed: 18.183378354s
+Dec 10 10:59:42.085: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Running", Reason="", readiness=true. Elapsed: 20.186842113s
+Dec 10 10:59:44.089: INFO: Pod "pod-subpath-test-configmap-lvv6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.190678426s
+STEP: Saw pod success
+Dec 10 10:59:44.089: INFO: Pod "pod-subpath-test-configmap-lvv6" satisfied condition "success or failure"
+Dec 10 10:59:44.092: INFO: Trying to get logs from node dce82 pod pod-subpath-test-configmap-lvv6 container test-container-subpath-configmap-lvv6: 
+STEP: delete the pod
+Dec 10 10:59:44.103: INFO: Waiting for pod pod-subpath-test-configmap-lvv6 to disappear
+Dec 10 10:59:44.106: INFO: Pod pod-subpath-test-configmap-lvv6 no longer exists
+STEP: Deleting pod pod-subpath-test-configmap-lvv6
+Dec 10 10:59:44.106: INFO: Deleting pod "pod-subpath-test-configmap-lvv6" in namespace "subpath-9196"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:59:44.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-9196" for this suite.
+Dec 10 10:59:50.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:59:50.211: INFO: namespace subpath-9196 deletion completed in 6.100745371s
+
+• [SLOW TEST:28.515 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  Atomic writer volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
+    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Proxy server 
+  should support proxy with --port 0  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:59:50.212: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2749
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[It] should support proxy with --port 0  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: starting the proxy server
+Dec 10 10:59:50.360: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-845205613 proxy -p 0 --disable-filter'
+STEP: curling proxy /api/ output
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 10:59:50.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-2749" for this suite.
+Dec 10 10:59:56.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 10:59:56.528: INFO: namespace kubectl-2749 deletion completed in 6.090894652s
+
+• [SLOW TEST:6.316 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Proxy server
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should support proxy with --port 0  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl replace 
+  should update a single-container pod's image  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 10:59:56.528: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1686
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[BeforeEach] [k8s.io] Kubectl replace
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1722
+[It] should update a single-container pod's image  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Dec 10 10:59:56.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1686'
+Dec 10 10:59:56.785: INFO: stderr: ""
+Dec 10 10:59:56.785: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
+STEP: verifying the pod e2e-test-nginx-pod is running
+STEP: verifying the pod e2e-test-nginx-pod was created
+Dec 10 11:00:01.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pod e2e-test-nginx-pod --namespace=kubectl-1686 -o json'
+Dec 10 11:00:01.912: INFO: stderr: ""
+Dec 10 11:00:01.912: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"annotations\": {\n            \"kubernetes.io/psp\": \"dce-psp-allow-all\"\n        },\n        \"creationTimestamp\": \"2019-12-10T10:59:56Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-1686\",\n        \"resourceVersion\": \"374102\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-1686/pods/e2e-test-nginx-pod\",\n        \"uid\": \"303880d6-9ab4-4c7f-9672-581131049490\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-pjk79\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"dce82\",\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-pjk79\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-pjk79\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-10T10:59:56Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-10T10:59:59Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-10T10:59:59Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-10T10:59:56Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://cf1114715124997bba3449e0663d1c4f0373136f32417a598f0e210057922bcb\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-10T10:59:58Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.6.135.82\",\n        \"phase\": \"Running\",\n        \"podIP\": \"172.28.8.72\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-10T10:59:56Z\"\n    }\n}\n"
+STEP: replace the image in the pod
+Dec 10 11:00:01.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 replace -f - --namespace=kubectl-1686'
+Dec 10 11:00:02.097: INFO: stderr: ""
+Dec 10 11:00:02.097: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
+STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
+[AfterEach] [k8s.io] Kubectl replace
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1727
+Dec 10 11:00:02.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete pods e2e-test-nginx-pod --namespace=kubectl-1686'
+Dec 10 11:00:04.233: INFO: stderr: ""
+Dec 10 11:00:04.233: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:00:04.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-1686" for this suite.
+Dec 10 11:00:10.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:00:10.322: INFO: namespace kubectl-1686 deletion completed in 6.085366697s
+
+• [SLOW TEST:13.794 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl replace
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should update a single-container pod's image  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:00:10.323: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4677
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir 0666 on node default medium
+Dec 10 11:00:10.481: INFO: Waiting up to 5m0s for pod "pod-6c25b9d8-34ce-4035-9916-725a0eb48534" in namespace "emptydir-4677" to be "success or failure"
+Dec 10 11:00:10.483: INFO: Pod "pod-6c25b9d8-34ce-4035-9916-725a0eb48534": Phase="Pending", Reason="", readiness=false. Elapsed: 2.501489ms
+Dec 10 11:00:12.487: INFO: Pod "pod-6c25b9d8-34ce-4035-9916-725a0eb48534": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006110315s
+STEP: Saw pod success
+Dec 10 11:00:12.487: INFO: Pod "pod-6c25b9d8-34ce-4035-9916-725a0eb48534" satisfied condition "success or failure"
+Dec 10 11:00:12.489: INFO: Trying to get logs from node dce82 pod pod-6c25b9d8-34ce-4035-9916-725a0eb48534 container test-container: 
+STEP: delete the pod
+Dec 10 11:00:12.501: INFO: Waiting for pod pod-6c25b9d8-34ce-4035-9916-725a0eb48534 to disappear
+Dec 10 11:00:12.503: INFO: Pod pod-6c25b9d8-34ce-4035-9916-725a0eb48534 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:00:12.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-4677" for this suite.
+Dec 10 11:00:18.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:00:18.606: INFO: namespace emptydir-4677 deletion completed in 6.099896732s
+
+• [SLOW TEST:8.283 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:00:18.606: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4980
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 11:00:18.752: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cbd5044-d6ff-4da3-a7df-22445fabb111" in namespace "projected-4980" to be "success or failure"
+Dec 10 11:00:18.754: INFO: Pod "downwardapi-volume-7cbd5044-d6ff-4da3-a7df-22445fabb111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187895ms
+Dec 10 11:00:20.757: INFO: Pod "downwardapi-volume-7cbd5044-d6ff-4da3-a7df-22445fabb111": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00572453s
+STEP: Saw pod success
+Dec 10 11:00:20.757: INFO: Pod "downwardapi-volume-7cbd5044-d6ff-4da3-a7df-22445fabb111" satisfied condition "success or failure"
+Dec 10 11:00:20.760: INFO: Trying to get logs from node dce82 pod downwardapi-volume-7cbd5044-d6ff-4da3-a7df-22445fabb111 container client-container: 
+STEP: delete the pod
+Dec 10 11:00:20.779: INFO: Waiting for pod downwardapi-volume-7cbd5044-d6ff-4da3-a7df-22445fabb111 to disappear
+Dec 10 11:00:20.782: INFO: Pod downwardapi-volume-7cbd5044-d6ff-4da3-a7df-22445fabb111 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:00:20.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-4980" for this suite.
+Dec 10 11:00:26.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:00:26.982: INFO: namespace projected-4980 deletion completed in 6.196567092s
+
+• [SLOW TEST:8.376 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:00:26.982: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3561
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name projected-configmap-test-volume-569afec4-0dc7-4870-9e99-83e24a22d16e
+STEP: Creating a pod to test consume configMaps
+Dec 10 11:00:27.143: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-71e5e0d8-0382-4e09-8034-dc30ae879547" in namespace "projected-3561" to be "success or failure"
+Dec 10 11:00:27.197: INFO: Pod "pod-projected-configmaps-71e5e0d8-0382-4e09-8034-dc30ae879547": Phase="Pending", Reason="", readiness=false. Elapsed: 54.104121ms
+Dec 10 11:00:29.200: INFO: Pod "pod-projected-configmaps-71e5e0d8-0382-4e09-8034-dc30ae879547": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057008582s
+STEP: Saw pod success
+Dec 10 11:00:29.200: INFO: Pod "pod-projected-configmaps-71e5e0d8-0382-4e09-8034-dc30ae879547" satisfied condition "success or failure"
+Dec 10 11:00:29.202: INFO: Trying to get logs from node dce82 pod pod-projected-configmaps-71e5e0d8-0382-4e09-8034-dc30ae879547 container projected-configmap-volume-test: 
+STEP: delete the pod
+Dec 10 11:00:29.215: INFO: Waiting for pod pod-projected-configmaps-71e5e0d8-0382-4e09-8034-dc30ae879547 to disappear
+Dec 10 11:00:29.217: INFO: Pod pod-projected-configmaps-71e5e0d8-0382-4e09-8034-dc30ae879547 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:00:29.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-3561" for this suite.
+Dec 10 11:00:35.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:00:35.309: INFO: namespace projected-3561 deletion completed in 6.089457345s
+
+• [SLOW TEST:8.327 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
+  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:00:35.310: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-probe
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-4948
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
+[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:01:35.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-4948" for this suite.
+Dec 10 11:01:57.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:01:57.559: INFO: namespace container-probe-4948 deletion completed in 22.089634644s
+
+• [SLOW TEST:82.249 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:01:57.559: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-probe
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-3894
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
+[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating pod busybox-53cbc160-a13a-4f67-9908-3e9c6473c31a in namespace container-probe-3894
+Dec 10 11:02:01.725: INFO: Started pod busybox-53cbc160-a13a-4f67-9908-3e9c6473c31a in namespace container-probe-3894
+STEP: checking the pod's current state and verifying that restartCount is present
+Dec 10 11:02:01.728: INFO: Initial restart count of pod busybox-53cbc160-a13a-4f67-9908-3e9c6473c31a is 0
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:06:02.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-3894" for this suite.
+Dec 10 11:06:08.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:06:08.386: INFO: namespace container-probe-3894 deletion completed in 6.086031111s
+
+• [SLOW TEST:250.827 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:06:08.386: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename gc
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-4628
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for the rc to be deleted
+STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
+STEP: Gathering metrics
+Dec 10 11:06:48.648: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:06:48.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+W1210 11:06:48.647924      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+STEP: Destroying namespace "gc-4628" for this suite.
+Dec 10 11:06:54.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:06:54.780: INFO: namespace gc-4628 deletion completed in 6.128558648s
+
+• [SLOW TEST:46.394 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Secrets 
+  should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:06:54.781: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename secrets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-8161
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating secret secrets-8161/secret-test-66dbd9ed-4117-47e3-86f3-3c9f54d9df77
+STEP: Creating a pod to test consume secrets
+Dec 10 11:06:54.948: INFO: Waiting up to 5m0s for pod "pod-configmaps-aa95e6f6-1c86-4557-8b98-ff9a309e7a9f" in namespace "secrets-8161" to be "success or failure"
+Dec 10 11:06:54.951: INFO: Pod "pod-configmaps-aa95e6f6-1c86-4557-8b98-ff9a309e7a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.464768ms
+Dec 10 11:06:56.955: INFO: Pod "pod-configmaps-aa95e6f6-1c86-4557-8b98-ff9a309e7a9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006778954s
+STEP: Saw pod success
+Dec 10 11:06:56.955: INFO: Pod "pod-configmaps-aa95e6f6-1c86-4557-8b98-ff9a309e7a9f" satisfied condition "success or failure"
+Dec 10 11:06:56.958: INFO: Trying to get logs from node dce82 pod pod-configmaps-aa95e6f6-1c86-4557-8b98-ff9a309e7a9f container env-test: 
+STEP: delete the pod
+Dec 10 11:06:56.977: INFO: Waiting for pod pod-configmaps-aa95e6f6-1c86-4557-8b98-ff9a309e7a9f to disappear
+Dec 10 11:06:56.980: INFO: Pod pod-configmaps-aa95e6f6-1c86-4557-8b98-ff9a309e7a9f no longer exists
+[AfterEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:06:56.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-8161" for this suite.
+Dec 10 11:07:03.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:07:03.089: INFO: namespace secrets-8161 deletion completed in 6.105319263s
+
+• [SLOW TEST:8.308 seconds]
+[sig-api-machinery] Secrets
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
+  should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
+  should create a deployment from an image  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:07:03.089: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5432
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[BeforeEach] [k8s.io] Kubectl run deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558
+[It] should create a deployment from an image  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Dec 10 11:07:03.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-5432'
+Dec 10 11:07:03.396: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Dec 10 11:07:03.396: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
+STEP: verifying the deployment e2e-test-nginx-deployment was created
+STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
+[AfterEach] [k8s.io] Kubectl run deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
+Dec 10 11:07:05.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete deployment e2e-test-nginx-deployment --namespace=kubectl-5432'
+Dec 10 11:07:05.490: INFO: stderr: ""
+Dec 10 11:07:05.490: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:07:05.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-5432" for this suite.
+Dec 10 11:07:27.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:07:27.583: INFO: namespace kubectl-5432 deletion completed in 22.090245375s
+
+• [SLOW TEST:24.494 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl run deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should create a deployment from an image  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
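+
+For readers reproducing the check above outside the suite: the test exercised the deprecated `--generator=deployment/apps.v1` path, and the captured stderr points at the replacement. A minimal sketch of both forms (namespace flags omitted; this block is illustrative, not part of the captured run):
+```
+# Deprecated form the test ran (prints the warning captured above):
+kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1
+
+# Replacement the deprecation warning suggests: create the Deployment object directly.
+kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
+
+# Verify and clean up, as the test teardown does.
+kubectl get deployment e2e-test-nginx-deployment
+kubectl delete deployment e2e-test-nginx-deployment
+```
+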
+SSSS
+------------------------------
+[sig-apps] Deployment 
+  deployment should support rollover [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:07:27.583: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename deployment
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-5841
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
+[It] deployment should support rollover [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 11:07:27.731: INFO: Pod name rollover-pod: Found 0 pods out of 1
+Dec 10 11:07:32.736: INFO: Pod name rollover-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+Dec 10 11:07:32.736: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
+Dec 10 11:07:34.740: INFO: Creating deployment "test-rollover-deployment"
+Dec 10 11:07:34.747: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
+Dec 10 11:07:36.752: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
+Dec 10 11:07:36.758: INFO: Ensure that both replica sets have 1 created replica
+Dec 10 11:07:36.763: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
+Dec 10 11:07:36.770: INFO: Updating deployment test-rollover-deployment
+Dec 10 11:07:36.770: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
+Dec 10 11:07:38.776: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
+Dec 10 11:07:38.783: INFO: Make sure deployment "test-rollover-deployment" is complete
+Dec 10 11:07:38.788: INFO: all replica sets need to contain the pod-template-hash label
+Dec 10 11:07:38.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572856, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Dec 10 11:07:40.794: INFO: all replica sets need to contain the pod-template-hash label
+Dec 10 11:07:40.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572858, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Dec 10 11:07:42.793: INFO: all replica sets need to contain the pod-template-hash label
+Dec 10 11:07:42.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572858, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Dec 10 11:07:44.795: INFO: all replica sets need to contain the pod-template-hash label
+Dec 10 11:07:44.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572858, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Dec 10 11:07:46.793: INFO: all replica sets need to contain the pod-template-hash label
+Dec 10 11:07:46.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572858, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Dec 10 11:07:48.799: INFO: 
+Dec 10 11:07:48.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572858, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711572854, loc:(*time.Location)(0x7ec7a20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Dec 10 11:07:50.794: INFO: 
+Dec 10 11:07:50.794: INFO: Ensure that both old replica sets have no replicas
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
+Dec 10 11:07:50.801: INFO: Deployment "test-rollover-deployment":
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5841,SelfLink:/apis/apps/v1/namespaces/deployment-5841/deployments/test-rollover-deployment,UID:ca3935c9-2cd2-4ea8-bf47-b720a2042b56,ResourceVersion:375705,Generation:2,CreationTimestamp:2019-12-10 11:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-10 11:07:34 +0000 UTC 2019-12-10 11:07:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-10 11:07:48 +0000 UTC 2019-12-10 11:07:34 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
+
+Dec 10 11:07:50.804: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5841,SelfLink:/apis/apps/v1/namespaces/deployment-5841/replicasets/test-rollover-deployment-854595fc44,UID:f0caa36a-b66b-4881-9c6a-bdf6c01a46de,ResourceVersion:375695,Generation:2,CreationTimestamp:2019-12-10 11:07:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ca3935c9-2cd2-4ea8-bf47-b720a2042b56 0xc003044727 0xc003044728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
+Dec 10 11:07:50.804: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
+Dec 10 11:07:50.805: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5841,SelfLink:/apis/apps/v1/namespaces/deployment-5841/replicasets/test-rollover-controller,UID:4d143ac2-d951-4750-b8a8-dcfd5ec4fb1d,ResourceVersion:375704,Generation:2,CreationTimestamp:2019-12-10 11:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ca3935c9-2cd2-4ea8-bf47-b720a2042b56 0xc003044657 0xc003044658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
+Dec 10 11:07:50.805: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5841,SelfLink:/apis/apps/v1/namespaces/deployment-5841/replicasets/test-rollover-deployment-9b8b997cf,UID:9dc0cd96-4ba9-4821-9f8d-407adca374c4,ResourceVersion:375650,Generation:2,CreationTimestamp:2019-12-10 11:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ca3935c9-2cd2-4ea8-bf47-b720a2042b56 0xc0030447f0 0xc0030447f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
+Dec 10 11:07:50.807: INFO: Pod "test-rollover-deployment-854595fc44-z2lqf" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-z2lqf,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5841,SelfLink:/api/v1/namespaces/deployment-5841/pods/test-rollover-deployment-854595fc44-z2lqf,UID:d6f13bc6-ee3d-4a73-b1b0-15d3b7927f13,ResourceVersion:375670,Generation:0,CreationTimestamp:2019-12-10 11:07:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 f0caa36a-b66b-4881-9c6a-bdf6c01a46de 0xc003045417 0xc003045418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qjmz4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qjmz4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-qjmz4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003045490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030454b0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:07:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:07:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:07:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:07:36 +0000 UTC  }],Message:,Reason:,HostIP:10.6.135.82,PodIP:172.28.8.82,StartTime:2019-12-10 11:07:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-10 11:07:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://9f12ec0f2c8d6f6a083bcf8760aa6e4668220a47da6218b2a647a101c5df1eda}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:07:50.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-5841" for this suite.
+Dec 10 11:07:56.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:07:56.948: INFO: namespace deployment-5841 deletion completed in 6.136725067s
+
+• [SLOW TEST:29.365 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  deployment should support rollover [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
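+
+The rollover above completes because the pod template changed while `minReadySeconds: 10`, `maxUnavailable: 0`, and `maxSurge: 1` (visible in the spec dump) force the new replica set to stay Ready before the old ones drain to zero. A minimal sketch of the same pattern, with illustrative names rather than the suite's exact manifest:
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: rollover-demo
+spec:
+  replicas: 1
+  minReadySeconds: 10          # a new pod must stay Ready 10s before it counts as available
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 0        # never drop below the desired replica count
+      maxSurge: 1              # allow one extra pod during the rollover
+  selector:
+    matchLabels:
+      name: rollover-pod
+  template:
+    metadata:
+      labels:
+        name: rollover-pod
+    spec:
+      containers:
+      - name: app
+        image: docker.io/library/nginx:1.14-alpine
+EOF
+# Changing the pod template triggers the rollover; old replica sets scale to zero.
+kubectl set image deployment/rollover-demo app=gcr.io/kubernetes-e2e-test-images/redis:1.0
+kubectl rollout status deployment/rollover-demo
+kubectl get rs -l name=rollover-pod   # old replica sets should report 0 replicas
+```
+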
+SSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:07:56.948: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-probe
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-6268
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
+[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating pod liveness-6eef6574-0e22-46bb-8b5c-7b9d1a91a526 in namespace container-probe-6268
+Dec 10 11:07:59.103: INFO: Started pod liveness-6eef6574-0e22-46bb-8b5c-7b9d1a91a526 in namespace container-probe-6268
+STEP: checking the pod's current state and verifying that restartCount is present
+Dec 10 11:07:59.106: INFO: Initial restart count of pod liveness-6eef6574-0e22-46bb-8b5c-7b9d1a91a526 is 0
+Dec 10 11:08:13.145: INFO: Restart count of pod container-probe-6268/liveness-6eef6574-0e22-46bb-8b5c-7b9d1a91a526 is now 1 (14.039054955s elapsed)
+Dec 10 11:08:33.181: INFO: Restart count of pod container-probe-6268/liveness-6eef6574-0e22-46bb-8b5c-7b9d1a91a526 is now 2 (34.075734422s elapsed)
+Dec 10 11:08:53.216: INFO: Restart count of pod container-probe-6268/liveness-6eef6574-0e22-46bb-8b5c-7b9d1a91a526 is now 3 (54.110791447s elapsed)
+Dec 10 11:09:13.256: INFO: Restart count of pod container-probe-6268/liveness-6eef6574-0e22-46bb-8b5c-7b9d1a91a526 is now 4 (1m14.150379891s elapsed)
+Dec 10 11:10:27.406: INFO: Restart count of pod container-probe-6268/liveness-6eef6574-0e22-46bb-8b5c-7b9d1a91a526 is now 5 (2m28.300607953s elapsed)
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:10:27.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-6268" for this suite.
+Dec 10 11:10:33.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:10:33.496: INFO: namespace container-probe-6268 deletion completed in 6.079349819s
+
+• [SLOW TEST:156.548 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
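+
+The roughly 20-second spacing of the first four restarts above matches the liveness probe period, and the longer gap before the fifth reflects the kubelet's crash back-off; the assertion is only that restartCount never decreases. A minimal sketch of a pod whose liveness probe fails on purpose (illustrative manifest, not the suite's):
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-demo
+spec:
+  containers:
+  - name: app
+    image: docker.io/library/busybox:1.29
+    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
+    livenessProbe:
+      exec:
+        command: ["cat", "/tmp/healthy"]   # starts failing once the file is removed
+      initialDelaySeconds: 5
+      periodSeconds: 5
+EOF
+# Each probe failure restarts the container; the count only ever goes up.
+kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
+```
+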
+SSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:10:33.497: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename configmap
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8054
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name configmap-test-volume-a4db9ef4-1e0d-4965-812d-f0260091f4f3
+STEP: Creating a pod to test consume configMaps
+Dec 10 11:10:33.647: INFO: Waiting up to 5m0s for pod "pod-configmaps-4839f5ae-97ef-4c4b-93e2-30d06fe24c5a" in namespace "configmap-8054" to be "success or failure"
+Dec 10 11:10:33.650: INFO: Pod "pod-configmaps-4839f5ae-97ef-4c4b-93e2-30d06fe24c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.598869ms
+Dec 10 11:10:35.654: INFO: Pod "pod-configmaps-4839f5ae-97ef-4c4b-93e2-30d06fe24c5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00653209s
+STEP: Saw pod success
+Dec 10 11:10:35.654: INFO: Pod "pod-configmaps-4839f5ae-97ef-4c4b-93e2-30d06fe24c5a" satisfied condition "success or failure"
+Dec 10 11:10:35.657: INFO: Trying to get logs from node dce82 pod pod-configmaps-4839f5ae-97ef-4c4b-93e2-30d06fe24c5a container configmap-volume-test: 
+STEP: delete the pod
+Dec 10 11:10:35.674: INFO: Waiting for pod pod-configmaps-4839f5ae-97ef-4c4b-93e2-30d06fe24c5a to disappear
+Dec 10 11:10:35.678: INFO: Pod pod-configmaps-4839f5ae-97ef-4c4b-93e2-30d06fe24c5a no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:10:35.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-8054" for this suite.
+Dec 10 11:10:41.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:10:41.767: INFO: namespace configmap-8054 deletion completed in 6.085515742s
+
+• [SLOW TEST:8.270 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
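+
+What this test verifies is that `defaultMode` on a configMap volume source controls the permission bits the container sees on the projected files. A minimal sketch with illustrative names:
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: mode-demo
+data:
+  data-1: value-1
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-mode-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test
+    image: docker.io/library/busybox:1.29
+    # -L follows the volume's internal symlinks so the file mode is shown
+    command: ["sh", "-c", "ls -lL /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
+    volumeMounts:
+    - name: cfg
+      mountPath: /etc/configmap-volume
+  volumes:
+  - name: cfg
+    configMap:
+      name: mode-demo
+      defaultMode: 0400   # files in the volume appear as -r--------
+EOF
+kubectl logs configmap-mode-demo test   # expect the 0400 mode on data-1
+```
+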
+SSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates resource limits of pods that are allowed to run  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:10:41.767: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename sched-pred
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-4378
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
+Dec 10 11:10:41.968: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Dec 10 11:10:41.978: INFO: Waiting for terminating namespaces to be deleted...
+Dec 10 11:10:41.981: INFO: 
+Logging pods the kubelet thinks are on node dce81 before test
+Dec 10 11:10:41.989: INFO: node-local-dns-cv2r5 from kube-system started at 2019-12-10 09:33:54 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.989: INFO: 	Container node-cache ready: true, restart count 0
+Dec 10 11:10:41.989: INFO: dce-prometheus-698b884db7-5vrk2 from kube-system started at 2019-12-09 03:02:00 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.989: INFO: 	Container dce-prometheus ready: true, restart count 0
+Dec 10 11:10:41.989: INFO: smokeping-drpdh from kube-system started at 2019-12-08 10:38:00 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.989: INFO: 	Container smokeping ready: true, restart count 1
+Dec 10 11:10:41.989: INFO: calico-node-zj8bt from kube-system started at 2019-12-08 10:37:37 +0000 UTC (2 container statuses recorded)
+Dec 10 11:10:41.989: INFO: 	Container calico-node ready: true, restart count 2
+Dec 10 11:10:41.989: INFO: 	Container install-cni ready: true, restart count 2
+Dec 10 11:10:41.989: INFO: kube-proxy-lc4c7 from kube-system started at 2019-12-08 10:37:38 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.989: INFO: 	Container kube-proxy ready: true, restart count 2
+Dec 10 11:10:41.989: INFO: sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-dhl7h from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 11:10:41.989: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Dec 10 11:10:41.989: INFO: 	Container systemd-logs ready: true, restart count 0
+Dec 10 11:10:41.989: INFO: dce-cloud-provider-manager-rtcrj from kube-system started at 2019-12-08 10:37:37 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.989: INFO: 	Container dce-cloud-provider ready: true, restart count 2
+Dec 10 11:10:41.989: INFO: dce-chart-manager-797958bcff-v2wfh from kube-system started at 2019-12-08 10:38:00 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.989: INFO: 	Container chart-manager ready: true, restart count 1
+Dec 10 11:10:41.989: INFO: calico-kube-controllers-6b7d5ffdd4-x65qw from kube-system started at 2019-12-08 10:38:02 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.989: INFO: 	Container calico-kube-controllers ready: true, restart count 2
+Dec 10 11:10:41.989: INFO: 
+Logging pods the kubelet thinks are on node dce82 before test
+Dec 10 11:10:41.995: INFO: sonobuoy-e2e-job-3fef55150259473e from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 11:10:41.995: INFO: 	Container e2e ready: true, restart count 0
+Dec 10 11:10:41.995: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Dec 10 11:10:41.995: INFO: node-local-dns-jwvds from kube-system started at 2019-12-10 09:33:54 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.995: INFO: 	Container node-cache ready: true, restart count 0
+Dec 10 11:10:41.995: INFO: calico-node-6bfc2 from kube-system started at 2019-12-09 02:46:32 +0000 UTC (2 container statuses recorded)
+Dec 10 11:10:41.995: INFO: 	Container calico-node ready: true, restart count 1
+Dec 10 11:10:41.995: INFO: 	Container install-cni ready: true, restart count 1
+Dec 10 11:10:41.995: INFO: kube-proxy-gdkmh from kube-system started at 2019-12-09 02:46:32 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.995: INFO: 	Container kube-proxy ready: true, restart count 2
+Dec 10 11:10:41.995: INFO: sonobuoy from sonobuoy started at 2019-12-10 09:57:13 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.995: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Dec 10 11:10:41.995: INFO: coredns-56b78b5b9c-vvgnk from kube-system started at 2019-12-10 09:38:10 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.995: INFO: 	Container coredns ready: true, restart count 0
+Dec 10 11:10:41.995: INFO: smokeping-jw5wv from kube-system started at 2019-12-09 02:46:32 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.995: INFO: 	Container smokeping ready: true, restart count 2
+Dec 10 11:10:41.995: INFO: sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-vczr4 from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 11:10:41.995: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Dec 10 11:10:41.995: INFO: 	Container systemd-logs ready: true, restart count 0
+Dec 10 11:10:41.995: INFO: dce-system-dnsservice-868586b8dd-glqkf from dce-system started at 2019-12-10 09:28:56 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:41.995: INFO: 	Container dce-system-dnsservice ready: true, restart count 0
+Dec 10 11:10:41.995: INFO: 
+Logging pods the kubelet thinks are on node dce83 before test
+Dec 10 11:10:42.001: INFO: calico-node-856tw from kube-system started at 2019-12-09 02:46:26 +0000 UTC (2 container statuses recorded)
+Dec 10 11:10:42.001: INFO: 	Container calico-node ready: true, restart count 3
+Dec 10 11:10:42.001: INFO: 	Container install-cni ready: true, restart count 3
+Dec 10 11:10:42.001: INFO: coredns-56b78b5b9c-629w2 from kube-system started at 2019-12-10 09:38:10 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:42.001: INFO: 	Container coredns ready: true, restart count 0
+Dec 10 11:10:42.001: INFO: sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-9bdkz from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 11:10:42.001: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Dec 10 11:10:42.001: INFO: 	Container systemd-logs ready: true, restart count 0
+Dec 10 11:10:42.001: INFO: smokeping-xkkch from kube-system started at 2019-12-09 02:46:26 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:42.001: INFO: 	Container smokeping ready: true, restart count 5
+Dec 10 11:10:42.001: INFO: kube-proxy-g25r8 from kube-system started at 2019-12-09 02:46:26 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:42.001: INFO: 	Container kube-proxy ready: true, restart count 5
+Dec 10 11:10:42.001: INFO: coredns-coredns-7d54967c97-22wrr from kube-system started at 2019-12-09 02:58:50 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:42.001: INFO: 	Container coredns ready: true, restart count 5
+Dec 10 11:10:42.001: INFO: node-local-dns-mqqrp from kube-system started at 2019-12-10 09:33:54 +0000 UTC (1 container statuses recorded)
+Dec 10 11:10:42.001: INFO: 	Container node-cache ready: true, restart count 0
+[It] validates resource limits of pods that are allowed to run  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: verifying the node has the label node dce81
+STEP: verifying the node has the label node dce82
+STEP: verifying the node has the label node dce83
+Dec 10 11:10:42.049: INFO: Pod dce-system-dnsservice-868586b8dd-glqkf requesting resource cpu=300m on Node dce82
+Dec 10 11:10:42.049: INFO: Pod calico-kube-controllers-6b7d5ffdd4-x65qw requesting resource cpu=412m on Node dce81
+Dec 10 11:10:42.049: INFO: Pod calico-node-6bfc2 requesting resource cpu=250m on Node dce82
+Dec 10 11:10:42.049: INFO: Pod calico-node-856tw requesting resource cpu=250m on Node dce83
+Dec 10 11:10:42.049: INFO: Pod calico-node-zj8bt requesting resource cpu=250m on Node dce81
+Dec 10 11:10:42.049: INFO: Pod coredns-56b78b5b9c-629w2 requesting resource cpu=1000m on Node dce83
+Dec 10 11:10:42.049: INFO: Pod coredns-56b78b5b9c-vvgnk requesting resource cpu=1000m on Node dce82
+Dec 10 11:10:42.049: INFO: Pod coredns-coredns-7d54967c97-22wrr requesting resource cpu=1000m on Node dce83
+Dec 10 11:10:42.049: INFO: Pod dce-chart-manager-797958bcff-v2wfh requesting resource cpu=1000m on Node dce81
+Dec 10 11:10:42.049: INFO: Pod dce-cloud-provider-manager-rtcrj requesting resource cpu=100m on Node dce81
+Dec 10 11:10:42.049: INFO: Pod dce-prometheus-698b884db7-5vrk2 requesting resource cpu=250m on Node dce81
+Dec 10 11:10:42.049: INFO: Pod kube-proxy-g25r8 requesting resource cpu=250m on Node dce83
+Dec 10 11:10:42.049: INFO: Pod kube-proxy-gdkmh requesting resource cpu=250m on Node dce82
+Dec 10 11:10:42.049: INFO: Pod kube-proxy-lc4c7 requesting resource cpu=250m on Node dce81
+Dec 10 11:10:42.049: INFO: Pod node-local-dns-cv2r5 requesting resource cpu=2000m on Node dce81
+Dec 10 11:10:42.049: INFO: Pod node-local-dns-jwvds requesting resource cpu=2000m on Node dce82
+Dec 10 11:10:42.049: INFO: Pod node-local-dns-mqqrp requesting resource cpu=2000m on Node dce83
+Dec 10 11:10:42.049: INFO: Pod smokeping-drpdh requesting resource cpu=125m on Node dce81
+Dec 10 11:10:42.049: INFO: Pod smokeping-jw5wv requesting resource cpu=125m on Node dce82
+Dec 10 11:10:42.049: INFO: Pod smokeping-xkkch requesting resource cpu=125m on Node dce83
+Dec 10 11:10:42.049: INFO: Pod sonobuoy requesting resource cpu=0m on Node dce82
+Dec 10 11:10:42.049: INFO: Pod sonobuoy-e2e-job-3fef55150259473e requesting resource cpu=0m on Node dce82
+Dec 10 11:10:42.049: INFO: Pod sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-9bdkz requesting resource cpu=0m on Node dce83
+Dec 10 11:10:42.049: INFO: Pod sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-dhl7h requesting resource cpu=0m on Node dce81
+Dec 10 11:10:42.049: INFO: Pod sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-vczr4 requesting resource cpu=0m on Node dce82
+STEP: Starting Pods to consume most of the cluster CPU.
+STEP: Creating another pod that requires unavailable amount of CPU.
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-abaf97d6-b7f7-41a6-8c2b-87662b3a8773.15defe0ad20dfdb6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4378/filler-pod-abaf97d6-b7f7-41a6-8c2b-87662b3a8773 to dce81]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-abaf97d6-b7f7-41a6-8c2b-87662b3a8773.15defe0b216150c7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-abaf97d6-b7f7-41a6-8c2b-87662b3a8773.15defe0b342b641b], Reason = [Created], Message = [Created container filler-pod-abaf97d6-b7f7-41a6-8c2b-87662b3a8773]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-abaf97d6-b7f7-41a6-8c2b-87662b3a8773.15defe0b4229c7e7], Reason = [Started], Message = [Started container filler-pod-abaf97d6-b7f7-41a6-8c2b-87662b3a8773]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-d15c67da-a5e3-4eaa-92ed-bb5bb430943a.15defe0ad2db5893], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4378/filler-pod-d15c67da-a5e3-4eaa-92ed-bb5bb430943a to dce83]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-d15c67da-a5e3-4eaa-92ed-bb5bb430943a.15defe0b11095705], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-d15c67da-a5e3-4eaa-92ed-bb5bb430943a.15defe0b1b22bfa5], Reason = [Created], Message = [Created container filler-pod-d15c67da-a5e3-4eaa-92ed-bb5bb430943a]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-d15c67da-a5e3-4eaa-92ed-bb5bb430943a.15defe0b24cb7495], Reason = [Started], Message = [Started container filler-pod-d15c67da-a5e3-4eaa-92ed-bb5bb430943a]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-feaaab19-a75d-426f-9313-8c5909449a37.15defe0ad270f6c1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4378/filler-pod-feaaab19-a75d-426f-9313-8c5909449a37 to dce82]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-feaaab19-a75d-426f-9313-8c5909449a37.15defe0b0fd3d659], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-feaaab19-a75d-426f-9313-8c5909449a37.15defe0b184ce717], Reason = [Created], Message = [Created container filler-pod-feaaab19-a75d-426f-9313-8c5909449a37]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-feaaab19-a75d-426f-9313-8c5909449a37.15defe0b212d36d9], Reason = [Started], Message = [Started container filler-pod-feaaab19-a75d-426f-9313-8c5909449a37]
+STEP: Considering event: 
+Type = [Warning], Name = [additional-pod.15defe0bc24d8174], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient cpu.]
+STEP: removing the label node off the node dce83
+STEP: verifying the node doesn't have the label node
+STEP: removing the label node off the node dce81
+STEP: verifying the node doesn't have the label node
+STEP: removing the label node off the node dce82
+STEP: verifying the node doesn't have the label node
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:10:47.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-4378" for this suite.
+Dec 10 11:10:53.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:10:53.212: INFO: namespace sched-pred-4378 deletion completed in 6.078913393s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
+
+• [SLOW TEST:11.445 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
+  validates resource limits of pods that are allowed to run  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
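+
+The `FailedScheduling` event above is the expected outcome: once the filler pods consume the remaining allocatable CPU, no node can satisfy a further request. A minimal sketch of forcing the same `Insufficient cpu` event with a deliberately unsatisfiable request (illustrative, not the suite's filler-pod logic):
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: cpu-hog
+spec:
+  containers:
+  - name: pause
+    image: k8s.gcr.io/pause:3.1
+    resources:
+      requests:
+        cpu: "100"   # far more CPU than any node here can allocate
+EOF
+# Events should show: FailedScheduling ... 0/3 nodes are available: 3 Insufficient cpu.
+kubectl describe pod cpu-hog
+```
+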
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should run and stop simple daemon [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:10:53.212: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename daemonsets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-6018
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
+[It] should run and stop simple daemon [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating simple DaemonSet "daemon-set"
+STEP: Check that daemon pods launch on every node of the cluster.
+Dec 10 11:10:53.388: INFO: Number of nodes with available pods: 0
+Dec 10 11:10:53.388: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:10:54.395: INFO: Number of nodes with available pods: 0
+Dec 10 11:10:54.395: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:10:55.395: INFO: Number of nodes with available pods: 2
+Dec 10 11:10:55.395: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:10:56.395: INFO: Number of nodes with available pods: 3
+Dec 10 11:10:56.395: INFO: Number of running nodes: 3, number of available pods: 3
+STEP: Stop a daemon pod, check that the daemon pod is revived.
+Dec 10 11:10:56.411: INFO: Number of nodes with available pods: 2
+Dec 10 11:10:56.411: INFO: Node dce83 is running more than one daemon pod
+Dec 10 11:10:57.420: INFO: Number of nodes with available pods: 2
+Dec 10 11:10:57.420: INFO: Node dce83 is running more than one daemon pod
+Dec 10 11:10:58.420: INFO: Number of nodes with available pods: 2
+Dec 10 11:10:58.420: INFO: Node dce83 is running more than one daemon pod
+Dec 10 11:10:59.419: INFO: Number of nodes with available pods: 2
+Dec 10 11:10:59.419: INFO: Node dce83 is running more than one daemon pod
+Dec 10 11:11:00.422: INFO: Number of nodes with available pods: 2
+Dec 10 11:11:00.422: INFO: Node dce83 is running more than one daemon pod
+Dec 10 11:11:01.420: INFO: Number of nodes with available pods: 3
+Dec 10 11:11:01.420: INFO: Number of running nodes: 3, number of available pods: 3
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6018, will wait for the garbage collector to delete the pods
+Dec 10 11:11:01.484: INFO: Deleting DaemonSet.extensions daemon-set took: 6.335032ms
+Dec 10 11:11:01.884: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.237606ms
+Dec 10 11:11:10.787: INFO: Number of nodes with available pods: 0
+Dec 10 11:11:10.787: INFO: Number of running nodes: 0, number of available pods: 0
+Dec 10 11:11:10.789: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6018/daemonsets","resourceVersion":"376440"},"items":null}
+
+Dec 10 11:11:10.791: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6018/pods","resourceVersion":"376440"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:11:10.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-6018" for this suite.
+Dec 10 11:11:16.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:11:16.889: INFO: namespace daemonsets-6018 deletion completed in 6.086986878s
+
+• [SLOW TEST:23.677 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should run and stop simple daemon [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
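+
+The check above is the DaemonSet contract: one pod per eligible node, and a deleted daemon pod is recreated ("revived") by the controller. A minimal sketch with illustrative names:
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: daemon-demo
+spec:
+  selector:
+    matchLabels:
+      app: daemon-demo
+  template:
+    metadata:
+      labels:
+        app: daemon-demo
+    spec:
+      containers:
+      - name: app
+        image: k8s.gcr.io/pause:3.1
+EOF
+kubectl get pods -l app=daemon-demo -o wide     # expect one pod per schedulable node
+kubectl delete pod -l app=daemon-demo --wait=false
+kubectl get pods -l app=daemon-demo -o wide     # the controller revives the daemon pods
+```
+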
+SSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl logs 
+  should be able to retrieve and filter logs  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:11:16.889: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4604
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[BeforeEach] [k8s.io] Kubectl logs
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1293
+STEP: creating an rc
+Dec 10 11:11:17.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-4604'
+Dec 10 11:11:17.214: INFO: stderr: ""
+Dec 10 11:11:17.214: INFO: stdout: "replicationcontroller/redis-master created\n"
+[It] should be able to retrieve and filter logs  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Waiting for Redis master to start.
+Dec 10 11:11:18.216: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 11:11:18.216: INFO: Found 0 / 1
+Dec 10 11:11:19.217: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 11:11:19.217: INFO: Found 1 / 1
+Dec 10 11:11:19.217: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+Dec 10 11:11:19.220: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 11:11:19.220: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+STEP: checking for matching strings
+Dec 10 11:11:19.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 logs redis-master-zp4rn redis-master --namespace=kubectl-4604'
+Dec 10 11:11:19.316: INFO: stderr: ""
+Dec 10 11:11:19.316: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Dec 11:11:18.545 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Dec 11:11:18.545 # Server started, Redis version 3.2.12\n1:M 10 Dec 11:11:18.545 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Dec 11:11:18.545 * The server is now ready to accept connections on port 6379\n"
+STEP: limiting log lines
+Dec 10 11:11:19.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 log redis-master-zp4rn redis-master --namespace=kubectl-4604 --tail=1'
+Dec 10 11:11:19.425: INFO: stderr: ""
+Dec 10 11:11:19.425: INFO: stdout: "1:M 10 Dec 11:11:18.545 * The server is now ready to accept connections on port 6379\n"
+STEP: limiting log bytes
+Dec 10 11:11:19.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 log redis-master-zp4rn redis-master --namespace=kubectl-4604 --limit-bytes=1'
+Dec 10 11:11:19.524: INFO: stderr: ""
+Dec 10 11:11:19.524: INFO: stdout: " "
+STEP: exposing timestamps
+Dec 10 11:11:19.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 log redis-master-zp4rn redis-master --namespace=kubectl-4604 --tail=1 --timestamps'
+Dec 10 11:11:19.629: INFO: stderr: ""
+Dec 10 11:11:19.629: INFO: stdout: "2019-12-10T11:11:18.545898146Z 1:M 10 Dec 11:11:18.545 * The server is now ready to accept connections on port 6379\n"
+STEP: restricting to a time range
+Dec 10 11:11:22.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 log redis-master-zp4rn redis-master --namespace=kubectl-4604 --since=1s'
+Dec 10 11:11:22.233: INFO: stderr: ""
+Dec 10 11:11:22.233: INFO: stdout: ""
+Dec 10 11:11:22.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 log redis-master-zp4rn redis-master --namespace=kubectl-4604 --since=24h'
+Dec 10 11:11:22.325: INFO: stderr: ""
+Dec 10 11:11:22.325: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Dec 11:11:18.545 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Dec 11:11:18.545 # Server started, Redis version 3.2.12\n1:M 10 Dec 11:11:18.545 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Dec 11:11:18.545 * The server is now ready to accept connections on port 6379\n"
+[AfterEach] [k8s.io] Kubectl logs
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1299
+STEP: using delete to clean up resources
+Dec 10 11:11:22.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete --grace-period=0 --force -f - --namespace=kubectl-4604'
+Dec 10 11:11:22.408: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Dec 10 11:11:22.408: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
+Dec 10 11:11:22.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get rc,svc -l name=nginx --no-headers --namespace=kubectl-4604'
+Dec 10 11:11:22.487: INFO: stderr: "No resources found.\n"
+Dec 10 11:11:22.487: INFO: stdout: ""
+Dec 10 11:11:22.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -l name=nginx --namespace=kubectl-4604 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Dec 10 11:11:22.637: INFO: stderr: ""
+Dec 10 11:11:22.637: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:11:22.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-4604" for this suite.
+Dec 10 11:11:44.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:11:44.726: INFO: namespace kubectl-4604 deletion completed in 22.084204438s
+
+• [SLOW TEST:27.837 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl logs
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should be able to retrieve and filter logs  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:11:44.727: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8608
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 11:11:44.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df9eb745-64f4-4813-addd-e9b617e65d28" in namespace "downward-api-8608" to be "success or failure"
+Dec 10 11:11:44.887: INFO: Pod "downwardapi-volume-df9eb745-64f4-4813-addd-e9b617e65d28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171352ms
+Dec 10 11:11:46.891: INFO: Pod "downwardapi-volume-df9eb745-64f4-4813-addd-e9b617e65d28": Phase="Running", Reason="", readiness=true. Elapsed: 2.00805056s
+Dec 10 11:11:48.894: INFO: Pod "downwardapi-volume-df9eb745-64f4-4813-addd-e9b617e65d28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011181933s
+STEP: Saw pod success
+Dec 10 11:11:48.894: INFO: Pod "downwardapi-volume-df9eb745-64f4-4813-addd-e9b617e65d28" satisfied condition "success or failure"
+Dec 10 11:11:48.896: INFO: Trying to get logs from node dce82 pod downwardapi-volume-df9eb745-64f4-4813-addd-e9b617e65d28 container client-container: 
+STEP: delete the pod
+Dec 10 11:11:48.907: INFO: Waiting for pod downwardapi-volume-df9eb745-64f4-4813-addd-e9b617e65d28 to disappear
+Dec 10 11:11:48.909: INFO: Pod downwardapi-volume-df9eb745-64f4-4813-addd-e9b617e65d28 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:11:48.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-8608" for this suite.
+Dec 10 11:11:54.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:11:55.005: INFO: namespace downward-api-8608 deletion completed in 6.093094371s
+
+• [SLOW TEST:10.279 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] ConfigMap 
+  should fail to create ConfigMap with empty key [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-node] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:11:55.006: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename configmap
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6247
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should fail to create ConfigMap with empty key [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap that has name configmap-test-emptyKey-83a2913f-71b2-481f-a6dd-7abc141d8304
+[AfterEach] [sig-node] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:11:55.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-6247" for this suite.
+Dec 10 11:12:01.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:12:01.290: INFO: namespace configmap-6247 deletion completed in 6.123209844s
+
+• [SLOW TEST:6.284 seconds]
+[sig-node] ConfigMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
+  should fail to create ConfigMap with empty key [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:12:01.290: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename gc
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-75
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for all rs to be garbage collected
+STEP: expected 0 rs, got 1 rs
+STEP: expected 0 pods, got 2 pods
+STEP: Gathering metrics
+Dec 10 11:12:02.480: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+W1210 11:12:02.480640      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+Dec 10 11:12:02.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-75" for this suite.
+Dec 10 11:12:08.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:12:08.557: INFO: namespace gc-75 deletion completed in 6.074549555s
+
+• [SLOW TEST:7.267 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:12:08.558: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1252
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 11:12:08.707: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86f29278-7809-4570-890a-d35e3f426b5d" in namespace "downward-api-1252" to be "success or failure"
+Dec 10 11:12:08.709: INFO: Pod "downwardapi-volume-86f29278-7809-4570-890a-d35e3f426b5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.419214ms
+Dec 10 11:12:10.712: INFO: Pod "downwardapi-volume-86f29278-7809-4570-890a-d35e3f426b5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005126802s
+STEP: Saw pod success
+Dec 10 11:12:10.712: INFO: Pod "downwardapi-volume-86f29278-7809-4570-890a-d35e3f426b5d" satisfied condition "success or failure"
+Dec 10 11:12:10.713: INFO: Trying to get logs from node dce82 pod downwardapi-volume-86f29278-7809-4570-890a-d35e3f426b5d container client-container: 
+STEP: delete the pod
+Dec 10 11:12:10.725: INFO: Waiting for pod downwardapi-volume-86f29278-7809-4570-890a-d35e3f426b5d to disappear
+Dec 10 11:12:10.727: INFO: Pod downwardapi-volume-86f29278-7809-4570-890a-d35e3f426b5d no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:12:10.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-1252" for this suite.
+Dec 10 11:12:16.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:12:16.806: INFO: namespace downward-api-1252 deletion completed in 6.073555148s
+
+• [SLOW TEST:8.248 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:12:16.806: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename pods
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9564
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
+[It] should be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: updating the pod
+Dec 10 11:12:19.474: INFO: Successfully updated pod "pod-update-a06e1595-0a0f-4a9e-83b2-761974d5059a"
+STEP: verifying the updated pod is in kubernetes
+Dec 10 11:12:19.481: INFO: Pod update OK
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:12:19.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-9564" for this suite.
+Dec 10 11:12:41.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:12:41.570: INFO: namespace pods-9564 deletion completed in 22.086376492s
+
+• [SLOW TEST:24.764 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:12:41.570: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-3775
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Performing setup for networking test in namespace pod-network-test-3775
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Dec 10 11:12:41.717: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Dec 10 11:13:05.800: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.28.194.253:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3775 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Dec 10 11:13:05.800: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+Dec 10 11:13:05.934: INFO: Found all expected endpoints: [netserver-0]
+Dec 10 11:13:05.938: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.28.104.212:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3775 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Dec 10 11:13:05.938: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+Dec 10 11:13:06.068: INFO: Found all expected endpoints: [netserver-1]
+Dec 10 11:13:06.072: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.28.8.90:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3775 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Dec 10 11:13:06.072: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+Dec 10 11:13:06.197: INFO: Found all expected endpoints: [netserver-2]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:13:06.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pod-network-test-3775" for this suite.
+Dec 10 11:13:28.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:13:28.288: INFO: namespace pod-network-test-3775 deletion completed in 22.087380098s
+
+• [SLOW TEST:46.718 seconds]
+[sig-network] Networking
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SS
+------------------------------
+[k8s.io] Container Runtime blackbox test on terminated container 
+  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:13:28.288: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-runtime
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-7954
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: create the container
+STEP: wait for the container to reach Failed
+STEP: get the container status
+STEP: the container should be terminated
+STEP: the termination message should be set
+Dec 10 11:13:30.448: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
+STEP: delete the container
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:13:30.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-7954" for this suite.
+Dec 10 11:13:36.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:13:36.550: INFO: namespace container-runtime-7954 deletion completed in 6.089020695s
+
+• [SLOW TEST:8.262 seconds]
+[k8s.io] Container Runtime
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  blackbox test
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
+    on terminated container
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
+      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+      /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSS
+------------------------------
+[k8s.io] [sig-node] Events 
+  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] [sig-node] Events
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:13:36.550: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename events
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-1135
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: retrieving the pod
+Dec 10 11:13:38.720: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-7746ed5a-9448-4d03-8f94-4243e5d24c6a,GenerateName:,Namespace:events-1135,SelfLink:/api/v1/namespaces/events-1135/pods/send-events-7746ed5a-9448-4d03-8f94-4243e5d24c6a,UID:fea69154-27aa-4faf-8e1e-864ddc04cf53,ResourceVersion:377142,Generation:0,CreationTimestamp:2019-12-10 11:13:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 704150235,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rgcqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rgcqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-rgcqg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003f7a160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003f7a180}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:13:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:13:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:13:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:13:36 +0000 UTC  }],Message:,Reason:,HostIP:10.6.135.82,PodIP:172.28.8.92,StartTime:2019-12-10 11:13:36 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-10 11:13:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://930b62d72805c4738c2f34c5a0bf8bd63ceaf1c99ad15214d7535859da406a44}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+
+STEP: checking for scheduler event about the pod
+Dec 10 11:13:40.724: INFO: Saw scheduler event for our pod.
+STEP: checking for kubelet event about the pod
+Dec 10 11:13:42.731: INFO: Saw kubelet event for our pod.
+STEP: deleting the pod
+[AfterEach] [k8s.io] [sig-node] Events
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:13:42.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "events-1135" for this suite.
+Dec 10 11:14:22.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:14:22.822: INFO: namespace events-1135 deletion completed in 40.080161788s
+
+• [SLOW TEST:46.272 seconds]
+[k8s.io] [sig-node] Events
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SS
+------------------------------
+[k8s.io] Probing container 
+  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:14:22.822: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-probe
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-2811
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
+[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 11:14:42.981: INFO: Container started at 2019-12-10 11:14:24 +0000 UTC, pod became ready at 2019-12-10 11:14:42 +0000 UTC
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:14:42.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-2811" for this suite.
+Dec 10 11:15:04.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:15:05.059: INFO: namespace container-probe-2811 deletion completed in 22.074209349s
+
+• [SLOW TEST:42.237 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should receive events on concurrent watches in same order [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:15:05.060: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename watch
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-5760
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should receive events on concurrent watches in same order [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: starting a background goroutine to produce watch events
+STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:15:10.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-5760" for this suite.
+Dec 10 11:15:16.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:15:16.844: INFO: namespace watch-5760 deletion completed in 6.164889056s
+
+• [SLOW TEST:11.785 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should receive events on concurrent watches in same order [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSS
+------------------------------
+[sig-network] Service endpoints latency 
+  should not be very high  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-network] Service endpoints latency
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:15:16.845: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename svc-latency
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svc-latency-3089
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not be very high  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating replication controller svc-latency-rc in namespace svc-latency-3089
+I1210 11:15:16.989858      19 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3089, replica count: 1
+I1210 11:15:18.040332      19 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1210 11:15:19.040534      19 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1210 11:15:20.040969      19 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+Dec 10 11:15:20.148: INFO: Created: latency-svc-9fvk8
+Dec 10 11:15:20.151: INFO: Got endpoints: latency-svc-9fvk8 [10.77697ms]
+Dec 10 11:15:20.161: INFO: Created: latency-svc-fm7km
+Dec 10 11:15:20.161: INFO: Created: latency-svc-pjh9z
+Dec 10 11:15:20.163: INFO: Got endpoints: latency-svc-pjh9z [11.368339ms]
+Dec 10 11:15:20.164: INFO: Created: latency-svc-5gclz
+Dec 10 11:15:20.164: INFO: Got endpoints: latency-svc-fm7km [12.685684ms]
+Dec 10 11:15:20.166: INFO: Got endpoints: latency-svc-5gclz [14.144079ms]
+Dec 10 11:15:20.167: INFO: Created: latency-svc-pjsd8
+Dec 10 11:15:20.171: INFO: Got endpoints: latency-svc-pjsd8 [19.079405ms]
+Dec 10 11:15:20.172: INFO: Created: latency-svc-qmmkh
+Dec 10 11:15:20.177: INFO: Created: latency-svc-x6grh
+Dec 10 11:15:20.177: INFO: Got endpoints: latency-svc-qmmkh [24.989307ms]
+Dec 10 11:15:20.179: INFO: Got endpoints: latency-svc-x6grh [26.988227ms]
+Dec 10 11:15:20.180: INFO: Created: latency-svc-4vp78
+Dec 10 11:15:20.182: INFO: Created: latency-svc-b2fhw
+Dec 10 11:15:20.184: INFO: Got endpoints: latency-svc-4vp78 [31.635924ms]
+Dec 10 11:15:20.184: INFO: Created: latency-svc-t54mt
+Dec 10 11:15:20.184: INFO: Got endpoints: latency-svc-b2fhw [32.420646ms]
+Dec 10 11:15:20.186: INFO: Created: latency-svc-rgsnm
+Dec 10 11:15:20.187: INFO: Got endpoints: latency-svc-t54mt [35.762338ms]
+Dec 10 11:15:20.189: INFO: Got endpoints: latency-svc-rgsnm [36.714195ms]
+Dec 10 11:15:20.190: INFO: Created: latency-svc-v8mtg
+Dec 10 11:15:20.192: INFO: Created: latency-svc-j7qmx
+Dec 10 11:15:20.193: INFO: Got endpoints: latency-svc-v8mtg [41.47474ms]
+Dec 10 11:15:20.194: INFO: Created: latency-svc-xmxpr
+Dec 10 11:15:20.195: INFO: Got endpoints: latency-svc-j7qmx [43.009168ms]
+Dec 10 11:15:20.197: INFO: Got endpoints: latency-svc-xmxpr [44.685494ms]
+Dec 10 11:15:20.197: INFO: Created: latency-svc-958d2
+Dec 10 11:15:20.199: INFO: Got endpoints: latency-svc-958d2 [47.049714ms]
+Dec 10 11:15:20.200: INFO: Created: latency-svc-28tkr
+Dec 10 11:15:20.227: INFO: Created: latency-svc-shppz
+Dec 10 11:15:20.228: INFO: Got endpoints: latency-svc-28tkr [76.028901ms]
+Dec 10 11:15:20.238: INFO: Got endpoints: latency-svc-shppz [41.192222ms]
+Dec 10 11:15:20.238: INFO: Created: latency-svc-nnxjm
+Dec 10 11:15:20.241: INFO: Got endpoints: latency-svc-nnxjm [78.037744ms]
+Dec 10 11:15:20.241: INFO: Created: latency-svc-th7g7
+Dec 10 11:15:20.244: INFO: Created: latency-svc-tbjnq
+Dec 10 11:15:20.246: INFO: Created: latency-svc-h8fkr
+Dec 10 11:15:20.246: INFO: Got endpoints: latency-svc-th7g7 [82.081821ms]
+Dec 10 11:15:20.247: INFO: Got endpoints: latency-svc-tbjnq [80.855313ms]
+Dec 10 11:15:20.250: INFO: Created: latency-svc-wnflj
+Dec 10 11:15:20.251: INFO: Created: latency-svc-kz8p8
+Dec 10 11:15:20.252: INFO: Got endpoints: latency-svc-h8fkr [80.868495ms]
+Dec 10 11:15:20.264: INFO: Created: latency-svc-ltcmz
+Dec 10 11:15:20.266: INFO: Got endpoints: latency-svc-wnflj [89.454684ms]
+Dec 10 11:15:20.266: INFO: Got endpoints: latency-svc-kz8p8 [87.558392ms]
+Dec 10 11:15:20.268: INFO: Created: latency-svc-tmslp
+Dec 10 11:15:20.269: INFO: Got endpoints: latency-svc-ltcmz [85.833453ms]
+Dec 10 11:15:20.272: INFO: Got endpoints: latency-svc-tmslp [87.428432ms]
+Dec 10 11:15:20.273: INFO: Created: latency-svc-5fjv4
+Dec 10 11:15:20.276: INFO: Got endpoints: latency-svc-5fjv4 [88.227616ms]
+Dec 10 11:15:20.277: INFO: Created: latency-svc-mnxpt
+Dec 10 11:15:20.281: INFO: Created: latency-svc-t6kj4
+Dec 10 11:15:20.283: INFO: Got endpoints: latency-svc-mnxpt [93.789439ms]
+Dec 10 11:15:20.284: INFO: Created: latency-svc-x6hts
+Dec 10 11:15:20.285: INFO: Got endpoints: latency-svc-t6kj4 [91.963492ms]
+Dec 10 11:15:20.287: INFO: Created: latency-svc-d2s9z
+Dec 10 11:15:20.289: INFO: Got endpoints: latency-svc-x6hts [93.592979ms]
+Dec 10 11:15:20.290: INFO: Got endpoints: latency-svc-d2s9z [90.925066ms]
+Dec 10 11:15:20.291: INFO: Created: latency-svc-99qs2
+Dec 10 11:15:20.294: INFO: Created: latency-svc-9c89h
+Dec 10 11:15:20.295: INFO: Got endpoints: latency-svc-99qs2 [67.192394ms]
+Dec 10 11:15:20.297: INFO: Got endpoints: latency-svc-9c89h [58.918223ms]
+Dec 10 11:15:20.297: INFO: Created: latency-svc-gnhrk
+Dec 10 11:15:20.300: INFO: Created: latency-svc-htvdq
+Dec 10 11:15:20.302: INFO: Got endpoints: latency-svc-gnhrk [60.837328ms]
+Dec 10 11:15:20.303: INFO: Created: latency-svc-645pv
+Dec 10 11:15:20.305: INFO: Created: latency-svc-74jxg
+Dec 10 11:15:20.307: INFO: Created: latency-svc-qj7pk
+Dec 10 11:15:20.317: INFO: Created: latency-svc-qw7tv
+Dec 10 11:15:20.322: INFO: Created: latency-svc-fvgsf
+Dec 10 11:15:20.324: INFO: Created: latency-svc-wcb4f
+Dec 10 11:15:20.327: INFO: Created: latency-svc-8hnsp
+Dec 10 11:15:20.329: INFO: Created: latency-svc-5xwkt
+Dec 10 11:15:20.341: INFO: Created: latency-svc-p6jsh
+Dec 10 11:15:20.346: INFO: Created: latency-svc-n9kz5
+Dec 10 11:15:20.358: INFO: Created: latency-svc-b7spb
+Dec 10 11:15:20.358: INFO: Created: latency-svc-rhqmf
+Dec 10 11:15:20.359: INFO: Got endpoints: latency-svc-htvdq [112.241078ms]
+Dec 10 11:15:20.361: INFO: Created: latency-svc-ddjzs
+Dec 10 11:15:20.364: INFO: Created: latency-svc-nxclq
+Dec 10 11:15:20.367: INFO: Created: latency-svc-fkx5g
+Dec 10 11:15:20.401: INFO: Got endpoints: latency-svc-645pv [154.525043ms]
+Dec 10 11:15:20.406: INFO: Created: latency-svc-nmj5q
+Dec 10 11:15:20.452: INFO: Got endpoints: latency-svc-74jxg [199.850031ms]
+Dec 10 11:15:20.457: INFO: Created: latency-svc-xbtsp
+Dec 10 11:15:20.501: INFO: Got endpoints: latency-svc-qj7pk [235.20297ms]
+Dec 10 11:15:20.506: INFO: Created: latency-svc-spfhz
+Dec 10 11:15:20.552: INFO: Got endpoints: latency-svc-qw7tv [286.18251ms]
+Dec 10 11:15:20.559: INFO: Created: latency-svc-crtmw
+Dec 10 11:15:20.603: INFO: Got endpoints: latency-svc-fvgsf [333.793834ms]
+Dec 10 11:15:20.610: INFO: Created: latency-svc-mlmgf
+Dec 10 11:15:20.653: INFO: Got endpoints: latency-svc-wcb4f [380.88164ms]
+Dec 10 11:15:20.660: INFO: Created: latency-svc-wxvqc
+Dec 10 11:15:20.701: INFO: Got endpoints: latency-svc-8hnsp [425.303466ms]
+Dec 10 11:15:20.706: INFO: Created: latency-svc-f7h8k
+Dec 10 11:15:20.751: INFO: Got endpoints: latency-svc-5xwkt [468.504024ms]
+Dec 10 11:15:20.756: INFO: Created: latency-svc-jvstx
+Dec 10 11:15:20.803: INFO: Got endpoints: latency-svc-p6jsh [517.37767ms]
+Dec 10 11:15:20.811: INFO: Created: latency-svc-pz9dg
+Dec 10 11:15:20.852: INFO: Got endpoints: latency-svc-n9kz5 [563.156599ms]
+Dec 10 11:15:20.858: INFO: Created: latency-svc-pjxlq
+Dec 10 11:15:20.901: INFO: Got endpoints: latency-svc-b7spb [611.372496ms]
+Dec 10 11:15:20.907: INFO: Created: latency-svc-9kbbc
+Dec 10 11:15:20.952: INFO: Got endpoints: latency-svc-rhqmf [656.84553ms]
+Dec 10 11:15:20.958: INFO: Created: latency-svc-7mbxw
+Dec 10 11:15:21.002: INFO: Got endpoints: latency-svc-ddjzs [704.803793ms]
+Dec 10 11:15:21.009: INFO: Created: latency-svc-mxt8q
+Dec 10 11:15:21.052: INFO: Got endpoints: latency-svc-nxclq [749.710098ms]
+Dec 10 11:15:21.060: INFO: Created: latency-svc-t6xbp
+Dec 10 11:15:21.101: INFO: Got endpoints: latency-svc-fkx5g [742.619349ms]
+Dec 10 11:15:21.106: INFO: Created: latency-svc-z7b82
+Dec 10 11:15:21.152: INFO: Got endpoints: latency-svc-nmj5q [750.988668ms]
+Dec 10 11:15:21.159: INFO: Created: latency-svc-rtvx9
+Dec 10 11:15:21.201: INFO: Got endpoints: latency-svc-xbtsp [748.809518ms]
+Dec 10 11:15:21.205: INFO: Created: latency-svc-c664q
+Dec 10 11:15:21.252: INFO: Got endpoints: latency-svc-spfhz [750.818451ms]
+Dec 10 11:15:21.261: INFO: Created: latency-svc-r75vp
+Dec 10 11:15:21.302: INFO: Got endpoints: latency-svc-crtmw [749.227297ms]
+Dec 10 11:15:21.309: INFO: Created: latency-svc-s8c64
+Dec 10 11:15:21.352: INFO: Got endpoints: latency-svc-mlmgf [748.338309ms]
+Dec 10 11:15:21.358: INFO: Created: latency-svc-v4ckc
+Dec 10 11:15:21.401: INFO: Got endpoints: latency-svc-wxvqc [748.257069ms]
+Dec 10 11:15:21.406: INFO: Created: latency-svc-zw66j
+Dec 10 11:15:21.451: INFO: Got endpoints: latency-svc-f7h8k [750.122204ms]
+Dec 10 11:15:21.458: INFO: Created: latency-svc-86dqh
+Dec 10 11:15:21.501: INFO: Got endpoints: latency-svc-jvstx [750.313943ms]
+Dec 10 11:15:21.506: INFO: Created: latency-svc-fbfbw
+Dec 10 11:15:21.551: INFO: Got endpoints: latency-svc-pz9dg [748.619676ms]
+Dec 10 11:15:21.557: INFO: Created: latency-svc-h5bxs
+Dec 10 11:15:21.602: INFO: Got endpoints: latency-svc-pjxlq [749.800784ms]
+Dec 10 11:15:21.607: INFO: Created: latency-svc-q49gc
+Dec 10 11:15:21.652: INFO: Got endpoints: latency-svc-9kbbc [750.261496ms]
+Dec 10 11:15:21.659: INFO: Created: latency-svc-xj6b4
+Dec 10 11:15:21.702: INFO: Got endpoints: latency-svc-7mbxw [749.801256ms]
+Dec 10 11:15:21.709: INFO: Created: latency-svc-9r6zf
+Dec 10 11:15:21.751: INFO: Got endpoints: latency-svc-mxt8q [749.613967ms]
+Dec 10 11:15:21.757: INFO: Created: latency-svc-h8pqf
+Dec 10 11:15:21.801: INFO: Got endpoints: latency-svc-t6xbp [749.447308ms]
+Dec 10 11:15:21.806: INFO: Created: latency-svc-5k7d2
+Dec 10 11:15:21.852: INFO: Got endpoints: latency-svc-z7b82 [750.369492ms]
+Dec 10 11:15:21.857: INFO: Created: latency-svc-vmkcw
+Dec 10 11:15:21.902: INFO: Got endpoints: latency-svc-rtvx9 [749.115296ms]
+Dec 10 11:15:21.908: INFO: Created: latency-svc-cf95v
+Dec 10 11:15:21.952: INFO: Got endpoints: latency-svc-c664q [750.791418ms]
+Dec 10 11:15:21.958: INFO: Created: latency-svc-bvp84
+Dec 10 11:15:22.001: INFO: Got endpoints: latency-svc-r75vp [749.007543ms]
+Dec 10 11:15:22.008: INFO: Created: latency-svc-8t55q
+Dec 10 11:15:22.052: INFO: Got endpoints: latency-svc-s8c64 [750.19153ms]
+Dec 10 11:15:22.071: INFO: Created: latency-svc-dnd9n
+Dec 10 11:15:22.103: INFO: Got endpoints: latency-svc-v4ckc [751.722047ms]
+Dec 10 11:15:22.112: INFO: Created: latency-svc-7b4gd
+Dec 10 11:15:22.152: INFO: Got endpoints: latency-svc-zw66j [750.414467ms]
+Dec 10 11:15:22.158: INFO: Created: latency-svc-mf685
+Dec 10 11:15:22.202: INFO: Got endpoints: latency-svc-86dqh [750.548469ms]
+Dec 10 11:15:22.207: INFO: Created: latency-svc-s75kb
+Dec 10 11:15:22.252: INFO: Got endpoints: latency-svc-fbfbw [750.848733ms]
+Dec 10 11:15:22.260: INFO: Created: latency-svc-6v5bn
+Dec 10 11:15:22.302: INFO: Got endpoints: latency-svc-h5bxs [750.82274ms]
+Dec 10 11:15:22.308: INFO: Created: latency-svc-2zg2b
+Dec 10 11:15:22.352: INFO: Got endpoints: latency-svc-q49gc [750.13173ms]
+Dec 10 11:15:22.358: INFO: Created: latency-svc-8tlrl
+Dec 10 11:15:22.401: INFO: Got endpoints: latency-svc-xj6b4 [749.402649ms]
+Dec 10 11:15:22.407: INFO: Created: latency-svc-nqwj8
+Dec 10 11:15:22.452: INFO: Got endpoints: latency-svc-9r6zf [750.181184ms]
+Dec 10 11:15:22.457: INFO: Created: latency-svc-p7ml9
+Dec 10 11:15:22.503: INFO: Got endpoints: latency-svc-h8pqf [751.122594ms]
+Dec 10 11:15:22.509: INFO: Created: latency-svc-dsq8z
+Dec 10 11:15:22.551: INFO: Got endpoints: latency-svc-5k7d2 [750.346418ms]
+Dec 10 11:15:22.558: INFO: Created: latency-svc-84z2x
+Dec 10 11:15:22.601: INFO: Got endpoints: latency-svc-vmkcw [749.372736ms]
+Dec 10 11:15:22.605: INFO: Created: latency-svc-pp2qz
+Dec 10 11:15:22.652: INFO: Got endpoints: latency-svc-cf95v [750.136625ms]
+Dec 10 11:15:22.658: INFO: Created: latency-svc-9dwwf
+Dec 10 11:15:22.701: INFO: Got endpoints: latency-svc-bvp84 [749.513036ms]
+Dec 10 11:15:22.707: INFO: Created: latency-svc-2gq5r
+Dec 10 11:15:22.752: INFO: Got endpoints: latency-svc-8t55q [750.606768ms]
+Dec 10 11:15:22.758: INFO: Created: latency-svc-sjvr9
+Dec 10 11:15:22.801: INFO: Got endpoints: latency-svc-dnd9n [749.046806ms]
+Dec 10 11:15:22.806: INFO: Created: latency-svc-zfb9g
+Dec 10 11:15:22.852: INFO: Got endpoints: latency-svc-7b4gd [748.096369ms]
+Dec 10 11:15:22.858: INFO: Created: latency-svc-7hjdq
+Dec 10 11:15:22.901: INFO: Got endpoints: latency-svc-mf685 [749.588888ms]
+Dec 10 11:15:22.906: INFO: Created: latency-svc-rkkxv
+Dec 10 11:15:22.952: INFO: Got endpoints: latency-svc-s75kb [750.272038ms]
+Dec 10 11:15:22.958: INFO: Created: latency-svc-zqmrv
+Dec 10 11:15:23.002: INFO: Got endpoints: latency-svc-6v5bn [749.225266ms]
+Dec 10 11:15:23.008: INFO: Created: latency-svc-ns8qr
+Dec 10 11:15:23.052: INFO: Got endpoints: latency-svc-2zg2b [749.876254ms]
+Dec 10 11:15:23.059: INFO: Created: latency-svc-7qm6s
+Dec 10 11:15:23.102: INFO: Got endpoints: latency-svc-8tlrl [749.779199ms]
+Dec 10 11:15:23.106: INFO: Created: latency-svc-j4s9v
+Dec 10 11:15:23.151: INFO: Got endpoints: latency-svc-nqwj8 [750.18591ms]
+Dec 10 11:15:23.158: INFO: Created: latency-svc-hvg97
+Dec 10 11:15:23.203: INFO: Got endpoints: latency-svc-p7ml9 [750.995685ms]
+Dec 10 11:15:23.210: INFO: Created: latency-svc-n7rmx
+Dec 10 11:15:23.252: INFO: Got endpoints: latency-svc-dsq8z [749.12509ms]
+Dec 10 11:15:23.258: INFO: Created: latency-svc-2j7sm
+Dec 10 11:15:23.301: INFO: Got endpoints: latency-svc-84z2x [749.594095ms]
+Dec 10 11:15:23.306: INFO: Created: latency-svc-gjbr2
+Dec 10 11:15:23.351: INFO: Got endpoints: latency-svc-pp2qz [750.235772ms]
+Dec 10 11:15:23.358: INFO: Created: latency-svc-cskng
+Dec 10 11:15:23.403: INFO: Got endpoints: latency-svc-9dwwf [750.937273ms]
+Dec 10 11:15:23.411: INFO: Created: latency-svc-f5c55
+Dec 10 11:15:23.452: INFO: Got endpoints: latency-svc-2gq5r [750.907563ms]
+Dec 10 11:15:23.459: INFO: Created: latency-svc-g25br
+Dec 10 11:15:23.501: INFO: Got endpoints: latency-svc-sjvr9 [748.938897ms]
+Dec 10 11:15:23.507: INFO: Created: latency-svc-7f82d
+Dec 10 11:15:23.552: INFO: Got endpoints: latency-svc-zfb9g [751.386475ms]
+Dec 10 11:15:23.559: INFO: Created: latency-svc-6ljks
+Dec 10 11:15:23.601: INFO: Got endpoints: latency-svc-7hjdq [749.745626ms]
+Dec 10 11:15:23.606: INFO: Created: latency-svc-8bkpj
+Dec 10 11:15:23.652: INFO: Got endpoints: latency-svc-rkkxv [750.297224ms]
+Dec 10 11:15:23.658: INFO: Created: latency-svc-nv8w2
+Dec 10 11:15:23.702: INFO: Got endpoints: latency-svc-zqmrv [749.336075ms]
+Dec 10 11:15:23.708: INFO: Created: latency-svc-md6wz
+Dec 10 11:15:23.751: INFO: Got endpoints: latency-svc-ns8qr [749.637038ms]
+Dec 10 11:15:23.758: INFO: Created: latency-svc-k8vvt
+Dec 10 11:15:23.801: INFO: Got endpoints: latency-svc-7qm6s [749.073176ms]
+Dec 10 11:15:23.808: INFO: Created: latency-svc-f8vrd
+Dec 10 11:15:23.852: INFO: Got endpoints: latency-svc-j4s9v [750.520439ms]
+Dec 10 11:15:23.858: INFO: Created: latency-svc-6s7p6
+Dec 10 11:15:23.901: INFO: Got endpoints: latency-svc-hvg97 [749.703387ms]
+Dec 10 11:15:23.907: INFO: Created: latency-svc-w5xs4
+Dec 10 11:15:23.953: INFO: Got endpoints: latency-svc-n7rmx [749.65653ms]
+Dec 10 11:15:23.960: INFO: Created: latency-svc-rs2jk
+Dec 10 11:15:24.001: INFO: Got endpoints: latency-svc-2j7sm [749.3818ms]
+Dec 10 11:15:24.008: INFO: Created: latency-svc-fgzzb
+Dec 10 11:15:24.052: INFO: Got endpoints: latency-svc-gjbr2 [751.148672ms]
+Dec 10 11:15:24.060: INFO: Created: latency-svc-kdg62
+Dec 10 11:15:24.102: INFO: Got endpoints: latency-svc-cskng [750.127897ms]
+Dec 10 11:15:24.107: INFO: Created: latency-svc-9xssp
+Dec 10 11:15:24.153: INFO: Got endpoints: latency-svc-f5c55 [750.004624ms]
+Dec 10 11:15:24.160: INFO: Created: latency-svc-56vkj
+Dec 10 11:15:24.202: INFO: Got endpoints: latency-svc-g25br [749.95178ms]
+Dec 10 11:15:24.209: INFO: Created: latency-svc-r5ppn
+Dec 10 11:15:24.252: INFO: Got endpoints: latency-svc-7f82d [750.865301ms]
+Dec 10 11:15:24.259: INFO: Created: latency-svc-zsf5x
+Dec 10 11:15:24.301: INFO: Got endpoints: latency-svc-6ljks [748.535927ms]
+Dec 10 11:15:24.306: INFO: Created: latency-svc-f7wkq
+Dec 10 11:15:24.351: INFO: Got endpoints: latency-svc-8bkpj [750.052555ms]
+Dec 10 11:15:24.358: INFO: Created: latency-svc-stfh4
+Dec 10 11:15:24.401: INFO: Got endpoints: latency-svc-nv8w2 [749.853859ms]
+Dec 10 11:15:24.406: INFO: Created: latency-svc-t98vz
+Dec 10 11:15:24.453: INFO: Got endpoints: latency-svc-md6wz [751.792257ms]
+Dec 10 11:15:24.460: INFO: Created: latency-svc-qk5zk
+Dec 10 11:15:24.501: INFO: Got endpoints: latency-svc-k8vvt [749.868958ms]
+Dec 10 11:15:24.505: INFO: Created: latency-svc-n8t6h
+Dec 10 11:15:24.552: INFO: Got endpoints: latency-svc-f8vrd [751.005712ms]
+Dec 10 11:15:24.559: INFO: Created: latency-svc-lv7p5
+Dec 10 11:15:24.601: INFO: Got endpoints: latency-svc-6s7p6 [749.137592ms]
+Dec 10 11:15:24.606: INFO: Created: latency-svc-jb4d4
+Dec 10 11:15:24.652: INFO: Got endpoints: latency-svc-w5xs4 [751.128582ms]
+Dec 10 11:15:24.658: INFO: Created: latency-svc-4jgxz
+Dec 10 11:15:24.702: INFO: Got endpoints: latency-svc-rs2jk [749.788651ms]
+Dec 10 11:15:24.710: INFO: Created: latency-svc-5dl6k
+Dec 10 11:15:24.755: INFO: Got endpoints: latency-svc-fgzzb [753.345298ms]
+Dec 10 11:15:24.771: INFO: Created: latency-svc-p8qf2
+Dec 10 11:15:24.804: INFO: Got endpoints: latency-svc-kdg62 [751.777415ms]
+Dec 10 11:15:24.809: INFO: Created: latency-svc-x75zl
+Dec 10 11:15:24.852: INFO: Got endpoints: latency-svc-9xssp [750.233651ms]
+Dec 10 11:15:24.858: INFO: Created: latency-svc-m4s4w
+Dec 10 11:15:24.901: INFO: Got endpoints: latency-svc-56vkj [748.216429ms]
+Dec 10 11:15:24.906: INFO: Created: latency-svc-w2c7x
+Dec 10 11:15:24.954: INFO: Got endpoints: latency-svc-r5ppn [751.489086ms]
+Dec 10 11:15:24.961: INFO: Created: latency-svc-6ds6f
+Dec 10 11:15:25.002: INFO: Got endpoints: latency-svc-zsf5x [750.002982ms]
+Dec 10 11:15:25.008: INFO: Created: latency-svc-4z6jc
+Dec 10 11:15:25.053: INFO: Got endpoints: latency-svc-f7wkq [751.577542ms]
+Dec 10 11:15:25.061: INFO: Created: latency-svc-4xxqp
+Dec 10 11:15:25.102: INFO: Got endpoints: latency-svc-stfh4 [750.007382ms]
+Dec 10 11:15:25.124: INFO: Created: latency-svc-fj7lb
+Dec 10 11:15:25.152: INFO: Got endpoints: latency-svc-t98vz [750.031823ms]
+Dec 10 11:15:25.157: INFO: Created: latency-svc-rm6xr
+Dec 10 11:15:25.201: INFO: Got endpoints: latency-svc-qk5zk [748.044351ms]
+Dec 10 11:15:25.207: INFO: Created: latency-svc-5bzx6
+Dec 10 11:15:25.252: INFO: Got endpoints: latency-svc-n8t6h [750.778627ms]
+Dec 10 11:15:25.259: INFO: Created: latency-svc-p9tbw
+Dec 10 11:15:25.301: INFO: Got endpoints: latency-svc-lv7p5 [748.838661ms]
+Dec 10 11:15:25.306: INFO: Created: latency-svc-d6fx6
+Dec 10 11:15:25.352: INFO: Got endpoints: latency-svc-jb4d4 [750.267979ms]
+Dec 10 11:15:25.357: INFO: Created: latency-svc-9t8z7
+Dec 10 11:15:25.401: INFO: Got endpoints: latency-svc-4jgxz [749.196834ms]
+Dec 10 11:15:25.407: INFO: Created: latency-svc-kwm45
+Dec 10 11:15:25.452: INFO: Got endpoints: latency-svc-5dl6k [749.21972ms]
+Dec 10 11:15:25.458: INFO: Created: latency-svc-mm42g
+Dec 10 11:15:25.501: INFO: Got endpoints: latency-svc-p8qf2 [746.659283ms]
+Dec 10 11:15:25.506: INFO: Created: latency-svc-zs97c
+Dec 10 11:15:25.552: INFO: Got endpoints: latency-svc-x75zl [747.608063ms]
+Dec 10 11:15:25.558: INFO: Created: latency-svc-lj9sf
+Dec 10 11:15:25.601: INFO: Got endpoints: latency-svc-m4s4w [749.414967ms]
+Dec 10 11:15:25.606: INFO: Created: latency-svc-8w2x5
+Dec 10 11:15:25.652: INFO: Got endpoints: latency-svc-w2c7x [750.389328ms]
+Dec 10 11:15:25.659: INFO: Created: latency-svc-x5tmn
+Dec 10 11:15:25.702: INFO: Got endpoints: latency-svc-6ds6f [748.253267ms]
+Dec 10 11:15:25.708: INFO: Created: latency-svc-78pqg
+Dec 10 11:15:25.752: INFO: Got endpoints: latency-svc-4z6jc [749.74314ms]
+Dec 10 11:15:25.757: INFO: Created: latency-svc-ptkss
+Dec 10 11:15:25.802: INFO: Got endpoints: latency-svc-4xxqp [748.688162ms]
+Dec 10 11:15:25.808: INFO: Created: latency-svc-mwbjr
+Dec 10 11:15:25.852: INFO: Got endpoints: latency-svc-fj7lb [750.916656ms]
+Dec 10 11:15:25.859: INFO: Created: latency-svc-89ppp
+Dec 10 11:15:25.901: INFO: Got endpoints: latency-svc-rm6xr [749.440355ms]
+Dec 10 11:15:25.906: INFO: Created: latency-svc-cgftk
+Dec 10 11:15:25.951: INFO: Got endpoints: latency-svc-5bzx6 [749.510484ms]
+Dec 10 11:15:25.956: INFO: Created: latency-svc-2kgld
+Dec 10 11:15:26.002: INFO: Got endpoints: latency-svc-p9tbw [749.789501ms]
+Dec 10 11:15:26.007: INFO: Created: latency-svc-xr2d7
+Dec 10 11:15:26.053: INFO: Got endpoints: latency-svc-d6fx6 [751.905059ms]
+Dec 10 11:15:26.060: INFO: Created: latency-svc-mcc7f
+Dec 10 11:15:26.102: INFO: Got endpoints: latency-svc-9t8z7 [750.28054ms]
+Dec 10 11:15:26.109: INFO: Created: latency-svc-zvdp7
+Dec 10 11:15:26.153: INFO: Got endpoints: latency-svc-kwm45 [751.306585ms]
+Dec 10 11:15:26.159: INFO: Created: latency-svc-hcq2g
+Dec 10 11:15:26.202: INFO: Got endpoints: latency-svc-mm42g [750.336969ms]
+Dec 10 11:15:26.208: INFO: Created: latency-svc-dssxr
+Dec 10 11:15:26.252: INFO: Got endpoints: latency-svc-zs97c [750.173725ms]
+Dec 10 11:15:26.257: INFO: Created: latency-svc-9bx24
+Dec 10 11:15:26.301: INFO: Got endpoints: latency-svc-lj9sf [749.391529ms]
+Dec 10 11:15:26.306: INFO: Created: latency-svc-b5x7n
+Dec 10 11:15:26.352: INFO: Got endpoints: latency-svc-8w2x5 [750.555389ms]
+Dec 10 11:15:26.359: INFO: Created: latency-svc-pwvxr
+Dec 10 11:15:26.401: INFO: Got endpoints: latency-svc-x5tmn [749.32954ms]
+Dec 10 11:15:26.406: INFO: Created: latency-svc-lgsch
+Dec 10 11:15:26.452: INFO: Got endpoints: latency-svc-78pqg [749.872451ms]
+Dec 10 11:15:26.458: INFO: Created: latency-svc-zb224
+Dec 10 11:15:26.502: INFO: Got endpoints: latency-svc-ptkss [749.791946ms]
+Dec 10 11:15:26.508: INFO: Created: latency-svc-w8t2q
+Dec 10 11:15:26.553: INFO: Got endpoints: latency-svc-mwbjr [751.437881ms]
+Dec 10 11:15:26.559: INFO: Created: latency-svc-vvvzv
+Dec 10 11:15:26.602: INFO: Got endpoints: latency-svc-89ppp [749.88124ms]
+Dec 10 11:15:26.608: INFO: Created: latency-svc-5c722
+Dec 10 11:15:26.653: INFO: Got endpoints: latency-svc-cgftk [751.456009ms]
+Dec 10 11:15:26.659: INFO: Created: latency-svc-bm4bq
+Dec 10 11:15:26.701: INFO: Got endpoints: latency-svc-2kgld [750.039592ms]
+Dec 10 11:15:26.708: INFO: Created: latency-svc-w9rv4
+Dec 10 11:15:26.752: INFO: Got endpoints: latency-svc-xr2d7 [750.130887ms]
+Dec 10 11:15:26.758: INFO: Created: latency-svc-2zpxs
+Dec 10 11:15:26.801: INFO: Got endpoints: latency-svc-mcc7f [748.050052ms]
+Dec 10 11:15:26.806: INFO: Created: latency-svc-dgsl5
+Dec 10 11:15:26.851: INFO: Got endpoints: latency-svc-zvdp7 [749.496066ms]
+Dec 10 11:15:26.857: INFO: Created: latency-svc-wshhb
+Dec 10 11:15:26.901: INFO: Got endpoints: latency-svc-hcq2g [748.561498ms]
+Dec 10 11:15:26.906: INFO: Created: latency-svc-9hkxz
+Dec 10 11:15:26.952: INFO: Got endpoints: latency-svc-dssxr [750.028583ms]
+Dec 10 11:15:26.958: INFO: Created: latency-svc-4gtt8
+Dec 10 11:15:27.006: INFO: Got endpoints: latency-svc-9bx24 [754.339827ms]
+Dec 10 11:15:27.011: INFO: Created: latency-svc-t5dc4
+Dec 10 11:15:27.051: INFO: Got endpoints: latency-svc-b5x7n [749.905235ms]
+Dec 10 11:15:27.056: INFO: Created: latency-svc-c6hrd
+Dec 10 11:15:27.102: INFO: Got endpoints: latency-svc-pwvxr [749.781574ms]
+Dec 10 11:15:27.108: INFO: Created: latency-svc-ffd2q
+Dec 10 11:15:27.152: INFO: Got endpoints: latency-svc-lgsch [750.640779ms]
+Dec 10 11:15:27.158: INFO: Created: latency-svc-5cmrc
+Dec 10 11:15:27.202: INFO: Got endpoints: latency-svc-zb224 [750.101373ms]
+Dec 10 11:15:27.207: INFO: Created: latency-svc-sm4cv
+Dec 10 11:15:27.251: INFO: Got endpoints: latency-svc-w8t2q [749.442882ms]
+Dec 10 11:15:27.258: INFO: Created: latency-svc-hwhfj
+Dec 10 11:15:27.301: INFO: Got endpoints: latency-svc-vvvzv [748.197668ms]
+Dec 10 11:15:27.308: INFO: Created: latency-svc-ln2gk
+Dec 10 11:15:27.352: INFO: Got endpoints: latency-svc-5c722 [749.091665ms]
+Dec 10 11:15:27.358: INFO: Created: latency-svc-m84dc
+Dec 10 11:15:27.402: INFO: Got endpoints: latency-svc-bm4bq [749.190788ms]
+Dec 10 11:15:27.413: INFO: Created: latency-svc-9zvg2
+Dec 10 11:15:27.452: INFO: Got endpoints: latency-svc-w9rv4 [751.044994ms]
+Dec 10 11:15:27.459: INFO: Created: latency-svc-d8ggd
+Dec 10 11:15:27.502: INFO: Got endpoints: latency-svc-2zpxs [749.780994ms]
+Dec 10 11:15:27.507: INFO: Created: latency-svc-k922x
+Dec 10 11:15:27.553: INFO: Got endpoints: latency-svc-dgsl5 [751.753334ms]
+Dec 10 11:15:27.559: INFO: Created: latency-svc-j2vbn
+Dec 10 11:15:27.602: INFO: Got endpoints: latency-svc-wshhb [750.077406ms]
+Dec 10 11:15:27.607: INFO: Created: latency-svc-9nvxc
+Dec 10 11:15:27.652: INFO: Got endpoints: latency-svc-9hkxz [750.742822ms]
+Dec 10 11:15:27.660: INFO: Created: latency-svc-wqh58
+Dec 10 11:15:27.702: INFO: Got endpoints: latency-svc-4gtt8 [749.218141ms]
+Dec 10 11:15:27.715: INFO: Created: latency-svc-wbj8x
+Dec 10 11:15:27.752: INFO: Got endpoints: latency-svc-t5dc4 [745.552861ms]
+Dec 10 11:15:27.757: INFO: Created: latency-svc-mnslp
+Dec 10 11:15:27.801: INFO: Got endpoints: latency-svc-c6hrd [750.21275ms]
+Dec 10 11:15:27.807: INFO: Created: latency-svc-hwfzw
+Dec 10 11:15:27.853: INFO: Got endpoints: latency-svc-ffd2q [750.894866ms]
+Dec 10 11:15:27.859: INFO: Created: latency-svc-xk2ww
+Dec 10 11:15:27.901: INFO: Got endpoints: latency-svc-5cmrc [749.435721ms]
+Dec 10 11:15:27.907: INFO: Created: latency-svc-52jhw
+Dec 10 11:15:27.952: INFO: Got endpoints: latency-svc-sm4cv [750.462434ms]
+Dec 10 11:15:27.959: INFO: Created: latency-svc-d8lkl
+Dec 10 11:15:28.001: INFO: Got endpoints: latency-svc-hwhfj [749.893589ms]
+Dec 10 11:15:28.052: INFO: Got endpoints: latency-svc-ln2gk [750.884552ms]
+Dec 10 11:15:28.100: INFO: Got endpoints: latency-svc-m84dc [748.923929ms]
+Dec 10 11:15:28.154: INFO: Got endpoints: latency-svc-9zvg2 [752.242225ms]
+Dec 10 11:15:28.201: INFO: Got endpoints: latency-svc-d8ggd [749.284169ms]
+Dec 10 11:15:28.252: INFO: Got endpoints: latency-svc-k922x [750.0708ms]
+Dec 10 11:15:28.302: INFO: Got endpoints: latency-svc-j2vbn [749.162307ms]
+Dec 10 11:15:28.353: INFO: Got endpoints: latency-svc-9nvxc [751.330327ms]
+Dec 10 11:15:28.401: INFO: Got endpoints: latency-svc-wqh58 [749.121673ms]
+Dec 10 11:15:28.453: INFO: Got endpoints: latency-svc-wbj8x [751.022914ms]
+Dec 10 11:15:28.502: INFO: Got endpoints: latency-svc-mnslp [750.06363ms]
+Dec 10 11:15:28.553: INFO: Got endpoints: latency-svc-hwfzw [751.979522ms]
+Dec 10 11:15:28.602: INFO: Got endpoints: latency-svc-xk2ww [749.15646ms]
+Dec 10 11:15:28.655: INFO: Got endpoints: latency-svc-52jhw [754.204133ms]
+Dec 10 11:15:28.702: INFO: Got endpoints: latency-svc-d8lkl [749.450366ms]
+Dec 10 11:15:28.702: INFO: Latencies: [11.368339ms 12.685684ms 14.144079ms 19.079405ms 24.989307ms 26.988227ms 31.635924ms 32.420646ms 35.762338ms 36.714195ms 41.192222ms 41.47474ms 43.009168ms 44.685494ms 47.049714ms 58.918223ms 60.837328ms 67.192394ms 76.028901ms 78.037744ms 80.855313ms 80.868495ms 82.081821ms 85.833453ms 87.428432ms 87.558392ms 88.227616ms 89.454684ms 90.925066ms 91.963492ms 93.592979ms 93.789439ms 112.241078ms 154.525043ms 199.850031ms 235.20297ms 286.18251ms 333.793834ms 380.88164ms 425.303466ms 468.504024ms 517.37767ms 563.156599ms 611.372496ms 656.84553ms 704.803793ms 742.619349ms 745.552861ms 746.659283ms 747.608063ms 748.044351ms 748.050052ms 748.096369ms 748.197668ms 748.216429ms 748.253267ms 748.257069ms 748.338309ms 748.535927ms 748.561498ms 748.619676ms 748.688162ms 748.809518ms 748.838661ms 748.923929ms 748.938897ms 749.007543ms 749.046806ms 749.073176ms 749.091665ms 749.115296ms 749.121673ms 749.12509ms 749.137592ms 749.15646ms 749.162307ms 749.190788ms 749.196834ms 749.218141ms 749.21972ms 749.225266ms 749.227297ms 749.284169ms 749.32954ms 749.336075ms 749.372736ms 749.3818ms 749.391529ms 749.402649ms 749.414967ms 749.435721ms 749.440355ms 749.442882ms 749.447308ms 749.450366ms 749.496066ms 749.510484ms 749.513036ms 749.588888ms 749.594095ms 749.613967ms 749.637038ms 749.65653ms 749.703387ms 749.710098ms 749.74314ms 749.745626ms 749.779199ms 749.780994ms 749.781574ms 749.788651ms 749.789501ms 749.791946ms 749.800784ms 749.801256ms 749.853859ms 749.868958ms 749.872451ms 749.876254ms 749.88124ms 749.893589ms 749.905235ms 749.95178ms 750.002982ms 750.004624ms 750.007382ms 750.028583ms 750.031823ms 750.039592ms 750.052555ms 750.06363ms 750.0708ms 750.077406ms 750.101373ms 750.122204ms 750.127897ms 750.130887ms 750.13173ms 750.136625ms 750.173725ms 750.181184ms 750.18591ms 750.19153ms 750.21275ms 750.233651ms 750.235772ms 750.261496ms 750.267979ms 750.272038ms 750.28054ms 750.297224ms 750.313943ms 750.336969ms 750.346418ms 750.369492ms 750.389328ms 750.414467ms 750.462434ms 750.520439ms 750.548469ms 750.555389ms 750.606768ms 750.640779ms 750.742822ms 750.778627ms 750.791418ms 750.818451ms 750.82274ms 750.848733ms 750.865301ms 750.884552ms 750.894866ms 750.907563ms 750.916656ms 750.937273ms 750.988668ms 750.995685ms 751.005712ms 751.022914ms 751.044994ms 751.122594ms 751.128582ms 751.148672ms 751.306585ms 751.330327ms 751.386475ms 751.437881ms 751.456009ms 751.489086ms 751.577542ms 751.722047ms 751.753334ms 751.777415ms 751.792257ms 751.905059ms 751.979522ms 752.242225ms 753.345298ms 754.204133ms 754.339827ms]
+Dec 10 11:15:28.702: INFO: 50 %ile: 749.613967ms
+Dec 10 11:15:28.702: INFO: 90 %ile: 751.122594ms
+Dec 10 11:15:28.702: INFO: 99 %ile: 754.204133ms
+Dec 10 11:15:28.702: INFO: Total sample count: 200
+[AfterEach] [sig-network] Service endpoints latency
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:15:28.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "svc-latency-3089" for this suite.
+Dec 10 11:15:36.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:15:36.807: INFO: namespace svc-latency-3089 deletion completed in 8.101303583s
+
+• [SLOW TEST:19.962 seconds]
+[sig-network] Service endpoints latency
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should not be very high  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
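+
+The percentile figures above are computed by the suite from 200 service-creation samples. For a rough manual spot-check outside the harness, a minimal sketch (all names illustrative; wall-clock resolution here is about a second, far coarser than the suite's timings):
+```
+# Create a deployment and a service, then time how long the service
+# takes to acquire a ready endpoint.
+kubectl create deployment latency-check --image=docker.io/library/nginx:1.14-alpine
+start=$(date +%s)
+kubectl expose deployment latency-check --port=80
+until kubectl get endpoints latency-check \
+    -o jsonpath='{.subsets[0].addresses[0].ip}' 2>/dev/null | grep -q .; do
+  sleep 0.2
+done
+echo "endpoints ready after $(( $(date +%s) - start ))s"
+```
+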
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Runtime blackbox test when starting a container that exits 
+  should run with the expected status [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:15:36.807: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-runtime
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-89
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should run with the expected status [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
+STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
+STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:16:01.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-89" for this suite.
+Dec 10 11:16:07.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:16:07.256: INFO: namespace container-runtime-89 deletion completed in 6.093690443s
+
+• [SLOW TEST:30.449 seconds]
+[k8s.io] Container Runtime
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  blackbox test
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
+    when starting a container that exits
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
+      should run with the expected status [NodeConformance] [Conformance]
+      /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
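+
+The container-name suffixes above read as the three restart policies (rpa = Always, rpof = OnFailure, rpn = Never). A minimal sketch of the Never case, assuming the ubiquitous busybox image rather than the suite's own test image:
+```
+# A container that exits cleanly once; with restartPolicy Never the pod
+# should settle into phase Succeeded with restartCount 0.
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: terminate-once
+spec:
+  restartPolicy: Never
+  containers:
+  - name: main
+    image: docker.io/library/busybox:1.29
+    command: ["sh", "-c", "exit 0"]
+EOF
+kubectl get pod terminate-once \
+  -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
+```
+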
+SSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Update Demo 
+  should scale a replication controller  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:16:07.256: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3617
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[BeforeEach] [k8s.io] Update Demo
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
+[It] should scale a replication controller  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating a replication controller
+Dec 10 11:16:07.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-3617'
+Dec 10 11:16:07.618: INFO: stderr: ""
+Dec 10 11:16:07.618: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Dec 10 11:16:07.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3617'
+Dec 10 11:16:07.701: INFO: stderr: ""
+Dec 10 11:16:07.701: INFO: stdout: "update-demo-nautilus-pmbrd update-demo-nautilus-qvb4p "
+Dec 10 11:16:07.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-pmbrd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:07.788: INFO: stderr: ""
+Dec 10 11:16:07.788: INFO: stdout: ""
+Dec 10 11:16:07.788: INFO: update-demo-nautilus-pmbrd is created but not running
+Dec 10 11:16:12.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3617'
+Dec 10 11:16:12.882: INFO: stderr: ""
+Dec 10 11:16:12.882: INFO: stdout: "update-demo-nautilus-pmbrd update-demo-nautilus-qvb4p "
+Dec 10 11:16:12.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-pmbrd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:12.962: INFO: stderr: ""
+Dec 10 11:16:12.962: INFO: stdout: "true"
+Dec 10 11:16:12.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-pmbrd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:13.039: INFO: stderr: ""
+Dec 10 11:16:13.039: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Dec 10 11:16:13.039: INFO: validating pod update-demo-nautilus-pmbrd
+Dec 10 11:16:13.045: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Dec 10 11:16:13.045: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Dec 10 11:16:13.045: INFO: update-demo-nautilus-pmbrd is verified up and running
+Dec 10 11:16:13.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-qvb4p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:13.141: INFO: stderr: ""
+Dec 10 11:16:13.141: INFO: stdout: "true"
+Dec 10 11:16:13.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-qvb4p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:13.221: INFO: stderr: ""
+Dec 10 11:16:13.221: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Dec 10 11:16:13.221: INFO: validating pod update-demo-nautilus-qvb4p
+Dec 10 11:16:13.225: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Dec 10 11:16:13.225: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Dec 10 11:16:13.226: INFO: update-demo-nautilus-qvb4p is verified up and running
+STEP: scaling down the replication controller
+Dec 10 11:16:13.227: INFO: scanned /root for discovery docs: 
+Dec 10 11:16:13.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3617'
+Dec 10 11:16:14.363: INFO: stderr: ""
+Dec 10 11:16:14.363: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Dec 10 11:16:14.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3617'
+Dec 10 11:16:14.459: INFO: stderr: ""
+Dec 10 11:16:14.459: INFO: stdout: "update-demo-nautilus-pmbrd update-demo-nautilus-qvb4p "
+STEP: Replicas for name=update-demo: expected=1 actual=2
+Dec 10 11:16:19.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3617'
+Dec 10 11:16:19.548: INFO: stderr: ""
+Dec 10 11:16:19.548: INFO: stdout: "update-demo-nautilus-pmbrd update-demo-nautilus-qvb4p "
+STEP: Replicas for name=update-demo: expected=1 actual=2
+Dec 10 11:16:24.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3617'
+Dec 10 11:16:24.642: INFO: stderr: ""
+Dec 10 11:16:24.642: INFO: stdout: "update-demo-nautilus-pmbrd "
+Dec 10 11:16:24.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-pmbrd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:24.719: INFO: stderr: ""
+Dec 10 11:16:24.719: INFO: stdout: "true"
+Dec 10 11:16:24.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-pmbrd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:24.804: INFO: stderr: ""
+Dec 10 11:16:24.804: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Dec 10 11:16:24.804: INFO: validating pod update-demo-nautilus-pmbrd
+Dec 10 11:16:24.808: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Dec 10 11:16:24.808: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Dec 10 11:16:24.808: INFO: update-demo-nautilus-pmbrd is verified up and running
+STEP: scaling up the replication controller
+Dec 10 11:16:24.809: INFO: scanned /root for discovery docs: 
+Dec 10 11:16:24.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3617'
+Dec 10 11:16:25.916: INFO: stderr: ""
+Dec 10 11:16:25.917: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Dec 10 11:16:25.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3617'
+Dec 10 11:16:26.012: INFO: stderr: ""
+Dec 10 11:16:26.012: INFO: stdout: "update-demo-nautilus-pmbrd update-demo-nautilus-xjxgl "
+Dec 10 11:16:26.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-pmbrd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:26.092: INFO: stderr: ""
+Dec 10 11:16:26.092: INFO: stdout: "true"
+Dec 10 11:16:26.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-pmbrd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:26.168: INFO: stderr: ""
+Dec 10 11:16:26.168: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Dec 10 11:16:26.168: INFO: validating pod update-demo-nautilus-pmbrd
+Dec 10 11:16:26.174: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Dec 10 11:16:26.174: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Dec 10 11:16:26.174: INFO: update-demo-nautilus-pmbrd is verified up and running
+Dec 10 11:16:26.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-xjxgl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:26.262: INFO: stderr: ""
+Dec 10 11:16:26.262: INFO: stdout: ""
+Dec 10 11:16:26.262: INFO: update-demo-nautilus-xjxgl is created but not running
+Dec 10 11:16:31.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3617'
+Dec 10 11:16:31.362: INFO: stderr: ""
+Dec 10 11:16:31.362: INFO: stdout: "update-demo-nautilus-pmbrd update-demo-nautilus-xjxgl "
+Dec 10 11:16:31.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-pmbrd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:31.453: INFO: stderr: ""
+Dec 10 11:16:31.453: INFO: stdout: "true"
+Dec 10 11:16:31.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-pmbrd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:31.542: INFO: stderr: ""
+Dec 10 11:16:31.542: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Dec 10 11:16:31.542: INFO: validating pod update-demo-nautilus-pmbrd
+Dec 10 11:16:31.547: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Dec 10 11:16:31.547: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Dec 10 11:16:31.547: INFO: update-demo-nautilus-pmbrd is verified up and running
+Dec 10 11:16:31.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-xjxgl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:31.637: INFO: stderr: ""
+Dec 10 11:16:31.637: INFO: stdout: "true"
+Dec 10 11:16:31.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-xjxgl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3617'
+Dec 10 11:16:31.714: INFO: stderr: ""
+Dec 10 11:16:31.714: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Dec 10 11:16:31.714: INFO: validating pod update-demo-nautilus-xjxgl
+Dec 10 11:16:31.719: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Dec 10 11:16:31.719: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Dec 10 11:16:31.719: INFO: update-demo-nautilus-xjxgl is verified up and running
+STEP: using delete to clean up resources
+Dec 10 11:16:31.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete --grace-period=0 --force -f - --namespace=kubectl-3617'
+Dec 10 11:16:31.802: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Dec 10 11:16:31.802: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
+Dec 10 11:16:31.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3617'
+Dec 10 11:16:31.895: INFO: stderr: "No resources found.\n"
+Dec 10 11:16:31.895: INFO: stdout: ""
+Dec 10 11:16:31.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -l name=update-demo --namespace=kubectl-3617 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Dec 10 11:16:31.986: INFO: stderr: ""
+Dec 10 11:16:31.986: INFO: stdout: "update-demo-nautilus-pmbrd\nupdate-demo-nautilus-xjxgl\n"
+Dec 10 11:16:32.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3617'
+Dec 10 11:16:32.582: INFO: stderr: "No resources found.\n"
+Dec 10 11:16:32.582: INFO: stdout: ""
+Dec 10 11:16:32.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -l name=update-demo --namespace=kubectl-3617 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Dec 10 11:16:32.658: INFO: stderr: ""
+Dec 10 11:16:32.658: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:16:32.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-3617" for this suite.
+Dec 10 11:16:38.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:16:38.751: INFO: namespace kubectl-3617 deletion completed in 6.089653023s
+
+• [SLOW TEST:31.495 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Update Demo
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should scale a replication controller  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
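+
+The scale-and-verify loop above has only two moving parts, the `kubectl scale` call and the go-template pod listing; reduced to a standalone form (namespace and RC name taken from this run):
+```
+# Scale the RC down to one replica, then list the pods the selector matches.
+kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m \
+  --namespace=kubectl-3617
+kubectl get pods -l name=update-demo --namespace=kubectl-3617 \
+  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
+```
+The listing lags the scale operation, which is why the log polls until the expected replica count appears.
+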
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] ReplicationController 
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:16:38.752: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename replication-controller
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-1662
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating replication controller my-hostname-basic-80eab96e-d31a-4034-a119-106c525042fd
+Dec 10 11:16:38.929: INFO: Pod name my-hostname-basic-80eab96e-d31a-4034-a119-106c525042fd: Found 0 pods out of 1
+Dec 10 11:16:43.934: INFO: Pod name my-hostname-basic-80eab96e-d31a-4034-a119-106c525042fd: Found 1 pods out of 1
+Dec 10 11:16:43.934: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-80eab96e-d31a-4034-a119-106c525042fd" are running
+Dec 10 11:16:43.938: INFO: Pod "my-hostname-basic-80eab96e-d31a-4034-a119-106c525042fd-f7tjd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-10 11:16:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-10 11:16:41 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-10 11:16:41 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-10 11:16:38 +0000 UTC Reason: Message:}])
+Dec 10 11:16:43.938: INFO: Trying to dial the pod
+Dec 10 11:16:48.952: INFO: Controller my-hostname-basic-80eab96e-d31a-4034-a119-106c525042fd: Got expected result from replica 1 [my-hostname-basic-80eab96e-d31a-4034-a119-106c525042fd-f7tjd]: "my-hostname-basic-80eab96e-d31a-4034-a119-106c525042fd-f7tjd", 1 of 1 required successes so far
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:16:48.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replication-controller-1662" for this suite.
+Dec 10 11:16:54.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:16:55.042: INFO: namespace replication-controller-1662 deletion completed in 6.085497276s
+
+• [SLOW TEST:16.289 seconds]
+[sig-apps] ReplicationController
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
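+
+A minimal sketch of the RC pattern being exercised, assuming the serve-hostname image the suite ships (tag and port may differ between releases; any image that answers HTTP with its own hostname works):
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: my-hostname-basic
+spec:
+  replicas: 1
+  selector:
+    name: my-hostname-basic
+  template:
+    metadata:
+      labels:
+        name: my-hostname-basic
+    spec:
+      containers:
+      - name: my-hostname-basic
+        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
+        ports:
+        - containerPort: 9376
+EOF
+kubectl get pods -l name=my-hostname-basic
+```
+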
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
+  should create a pod from an image when restart is Never  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:16:55.042: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3752
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[BeforeEach] [k8s.io] Kubectl run pod
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686
+[It] should create a pod from an image when restart is Never  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Dec 10 11:16:55.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3752'
+Dec 10 11:16:55.293: INFO: stderr: ""
+Dec 10 11:16:55.293: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
+STEP: verifying the pod e2e-test-nginx-pod was created
+[AfterEach] [k8s.io] Kubectl run pod
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1691
+Dec 10 11:16:55.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 delete pods e2e-test-nginx-pod --namespace=kubectl-3752'
+Dec 10 11:16:58.788: INFO: stderr: ""
+Dec 10 11:16:58.789: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:16:58.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-3752" for this suite.
+Dec 10 11:17:04.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:17:04.885: INFO: namespace kubectl-3752 deletion completed in 6.093624034s
+
+• [SLOW TEST:9.843 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl run pod
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should create a pod from an image when restart is Never  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
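+
+The command under test appears verbatim in the log; stripped of the harness wrapper it is just:
+```
+# --restart=Never with the run-pod/v1 generator creates a bare Pod,
+# not a Deployment or Job.
+kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
+  --image=docker.io/library/nginx:1.14-alpine
+kubectl get pod e2e-test-nginx-pod
+kubectl delete pod e2e-test-nginx-pod
+```
+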
+SSSSSS
+------------------------------
+[sig-apps] ReplicationController 
+  should adopt matching pods on creation [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:17:04.885: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename replication-controller
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-4911
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should adopt matching pods on creation [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Given a Pod with a 'name' label pod-adoption is created
+STEP: When a replication controller with a matching selector is created
+STEP: Then the orphan pod is adopted
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:17:08.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replication-controller-4911" for this suite.
+Dec 10 11:17:30.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:17:30.129: INFO: namespace replication-controller-4911 deletion completed in 22.071812511s
+
+• [SLOW TEST:25.244 seconds]
+[sig-apps] ReplicationController
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should adopt matching pods on creation [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
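+
+A sketch of the adoption flow the steps above describe: a bare pod whose labels match a later RC's selector picks up an ownerReference to that RC (pod and RC names illustrative):
+```
+# 1. A pod with a 'name' label, created on its own.
+kubectl run pod-adoption --restart=Never --generator=run-pod/v1 \
+  --image=docker.io/library/nginx:1.14-alpine --labels=name=pod-adoption
+# 2. An RC whose selector matches that label.
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: pod-adoption
+spec:
+  replicas: 1
+  selector:
+    name: pod-adoption
+  template:
+    metadata:
+      labels:
+        name: pod-adoption
+    spec:
+      containers:
+      - name: pod-adoption
+        image: docker.io/library/nginx:1.14-alpine
+EOF
+# 3. The orphan is adopted: expect ReplicationController/pod-adoption.
+kubectl get pod pod-adoption \
+  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
+```
+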
+SS
+------------------------------
+[k8s.io] Variable Expansion 
+  should allow substituting values in a container's args [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:17:30.129: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename var-expansion
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-8219
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test substitution in container's args
+Dec 10 11:17:30.288: INFO: Waiting up to 5m0s for pod "var-expansion-73fd83f7-e1a1-40c2-9212-11c9020ebb95" in namespace "var-expansion-8219" to be "success or failure"
+Dec 10 11:17:30.290: INFO: Pod "var-expansion-73fd83f7-e1a1-40c2-9212-11c9020ebb95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253352ms
+Dec 10 11:17:32.293: INFO: Pod "var-expansion-73fd83f7-e1a1-40c2-9212-11c9020ebb95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004653076s
+STEP: Saw pod success
+Dec 10 11:17:32.293: INFO: Pod "var-expansion-73fd83f7-e1a1-40c2-9212-11c9020ebb95" satisfied condition "success or failure"
+Dec 10 11:17:32.294: INFO: Trying to get logs from node dce82 pod var-expansion-73fd83f7-e1a1-40c2-9212-11c9020ebb95 container dapi-container: 
+STEP: delete the pod
+Dec 10 11:17:32.307: INFO: Waiting for pod var-expansion-73fd83f7-e1a1-40c2-9212-11c9020ebb95 to disappear
+Dec 10 11:17:32.309: INFO: Pod var-expansion-73fd83f7-e1a1-40c2-9212-11c9020ebb95 no longer exists
+[AfterEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:17:32.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "var-expansion-8219" for this suite.
+Dec 10 11:17:38.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:17:38.398: INFO: namespace var-expansion-8219 deletion completed in 6.085734986s
+
+• [SLOW TEST:8.270 seconds]
+[k8s.io] Variable Expansion
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should allow substituting values in a container's args [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
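+
+The expansion being tested is kubelet-side: `$(VAR)` references in a container's args are substituted from its env before the command ever runs. A minimal sketch, assuming busybox in place of the suite's test image (the container name `dapi-container` matches the log):
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: var-expansion-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: docker.io/library/busybox:1.29
+    command: ["sh", "-c"]
+    args: ["echo $(KUBE_MSG)"]
+    env:
+    - name: KUBE_MSG
+      value: "expanded-by-kubelet"
+EOF
+# Once the pod completes, its log should read: expanded-by-kubelet
+kubectl logs var-expansion-demo
+```
+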
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:17:38.399: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2259
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir 0666 on tmpfs
+Dec 10 11:17:38.550: INFO: Waiting up to 5m0s for pod "pod-0fb0ffd9-51be-44ed-be95-e68ce562c14d" in namespace "emptydir-2259" to be "success or failure"
+Dec 10 11:17:38.553: INFO: Pod "pod-0fb0ffd9-51be-44ed-be95-e68ce562c14d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.14961ms
+Dec 10 11:17:40.557: INFO: Pod "pod-0fb0ffd9-51be-44ed-be95-e68ce562c14d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006813664s
+STEP: Saw pod success
+Dec 10 11:17:40.557: INFO: Pod "pod-0fb0ffd9-51be-44ed-be95-e68ce562c14d" satisfied condition "success or failure"
+Dec 10 11:17:40.560: INFO: Trying to get logs from node dce82 pod pod-0fb0ffd9-51be-44ed-be95-e68ce562c14d container test-container: 
+STEP: delete the pod
+Dec 10 11:17:40.579: INFO: Waiting for pod pod-0fb0ffd9-51be-44ed-be95-e68ce562c14d to disappear
+Dec 10 11:17:40.582: INFO: Pod pod-0fb0ffd9-51be-44ed-be95-e68ce562c14d no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:17:40.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-2259" for this suite.
+Dec 10 11:17:46.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:17:46.672: INFO: namespace emptydir-2259 deletion completed in 6.086831189s
+
+• [SLOW TEST:8.273 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
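+
+The `(root,0666,tmpfs)` triple means: write as root, expect mode 0666, back the volume with memory. A minimal sketch, assuming busybox in place of the suite's mounttest image:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-tmpfs-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: docker.io/library/busybox:1.29
+    # Write a file with the requested mode, then show its permissions
+    # and confirm the mount is tmpfs.
+    command: ["sh", "-c",
+      "touch /ephemeral/f && chmod 0666 /ephemeral/f && ls -l /ephemeral/f && mount | grep /ephemeral"]
+    volumeMounts:
+    - name: scratch
+      mountPath: /ephemeral
+  volumes:
+  - name: scratch
+    emptyDir:
+      medium: Memory    # tmpfs rather than node disk
+EOF
+```
+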
+SSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:17:46.672: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-676
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 11:17:46.816: INFO: Waiting up to 5m0s for pod "downwardapi-volume-286abce1-fb8f-495a-b77c-72148ab5047e" in namespace "projected-676" to be "success or failure"
+Dec 10 11:17:46.819: INFO: Pod "downwardapi-volume-286abce1-fb8f-495a-b77c-72148ab5047e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.892047ms
+Dec 10 11:17:48.821: INFO: Pod "downwardapi-volume-286abce1-fb8f-495a-b77c-72148ab5047e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005835618s
+STEP: Saw pod success
+Dec 10 11:17:48.822: INFO: Pod "downwardapi-volume-286abce1-fb8f-495a-b77c-72148ab5047e" satisfied condition "success or failure"
+Dec 10 11:17:48.824: INFO: Trying to get logs from node dce82 pod downwardapi-volume-286abce1-fb8f-495a-b77c-72148ab5047e container client-container: 
+STEP: delete the pod
+Dec 10 11:17:48.843: INFO: Waiting for pod downwardapi-volume-286abce1-fb8f-495a-b77c-72148ab5047e to disappear
+Dec 10 11:17:48.848: INFO: Pod downwardapi-volume-286abce1-fb8f-495a-b77c-72148ab5047e no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:17:48.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-676" for this suite.
+Dec 10 11:17:54.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:17:54.941: INFO: namespace projected-676 deletion completed in 6.088849365s
+
+• [SLOW TEST:8.269 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
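+
+The projected downwardAPI volume surfaces the container's own CPU request as a file. A minimal sketch, assuming busybox and a 250m request (divisor 1m makes the file read in millicores):
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-cpu-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: docker.io/library/busybox:1.29
+    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
+    resources:
+      requests:
+        cpu: 250m
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: cpu_request
+            resourceFieldRef:
+              containerName: client-container
+              resource: requests.cpu
+              divisor: 1m
+EOF
+# Once the pod completes, the log should read: 250
+kubectl logs projected-cpu-demo
+```
+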
+SSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:17:54.941: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6195
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir 0666 on tmpfs
+Dec 10 11:17:55.091: INFO: Waiting up to 5m0s for pod "pod-f63baeda-b71b-4f1a-b613-f46ff8bc49aa" in namespace "emptydir-6195" to be "success or failure"
+Dec 10 11:17:55.093: INFO: Pod "pod-f63baeda-b71b-4f1a-b613-f46ff8bc49aa": Phase="Pending", Reason="", readiness=false. Elapsed: 1.744976ms
+Dec 10 11:17:57.096: INFO: Pod "pod-f63baeda-b71b-4f1a-b613-f46ff8bc49aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005057941s
+STEP: Saw pod success
+Dec 10 11:17:57.096: INFO: Pod "pod-f63baeda-b71b-4f1a-b613-f46ff8bc49aa" satisfied condition "success or failure"
+Dec 10 11:17:57.098: INFO: Trying to get logs from node dce82 pod pod-f63baeda-b71b-4f1a-b613-f46ff8bc49aa container test-container: 
+STEP: delete the pod
+Dec 10 11:17:57.111: INFO: Waiting for pod pod-f63baeda-b71b-4f1a-b613-f46ff8bc49aa to disappear
+Dec 10 11:17:57.114: INFO: Pod pod-f63baeda-b71b-4f1a-b613-f46ff8bc49aa no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:17:57.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-6195" for this suite.
+Dec 10 11:18:03.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:18:03.202: INFO: namespace emptydir-6195 deletion completed in 6.084648795s
+
+• [SLOW TEST:8.261 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl patch 
+  should add annotations for pods in rc  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:18:03.202: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2217
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[It] should add annotations for pods in rc  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating Redis RC
+Dec 10 11:18:03.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-2217'
+Dec 10 11:18:03.595: INFO: stderr: ""
+Dec 10 11:18:03.595: INFO: stdout: "replicationcontroller/redis-master created\n"
+STEP: Waiting for Redis master to start.
+Dec 10 11:18:04.598: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 11:18:04.598: INFO: Found 0 / 1
+Dec 10 11:18:05.598: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 11:18:05.598: INFO: Found 0 / 1
+Dec 10 11:18:06.598: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 11:18:06.598: INFO: Found 1 / 1
+Dec 10 11:18:06.598: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+STEP: patching all pods
+Dec 10 11:18:06.601: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 11:18:06.601: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+Dec 10 11:18:06.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 patch pod redis-master-c5qzr --namespace=kubectl-2217 -p {"metadata":{"annotations":{"x":"y"}}}'
+Dec 10 11:18:06.694: INFO: stderr: ""
+Dec 10 11:18:06.694: INFO: stdout: "pod/redis-master-c5qzr patched\n"
+STEP: checking annotations
+Dec 10 11:18:06.696: INFO: Selector matched 1 pods for map[app:redis]
+Dec 10 11:18:06.696: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:18:06.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-2217" for this suite.
+Dec 10 11:18:28.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:18:28.787: INFO: namespace kubectl-2217 deletion completed in 22.088470542s
+
+• [SLOW TEST:25.585 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl patch
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should add annotations for pods in rc  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:18:28.787: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5201
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir 0644 on tmpfs
+Dec 10 11:18:28.939: INFO: Waiting up to 5m0s for pod "pod-c9ebacf5-bd2e-401f-8fb9-060d9af1ff3b" in namespace "emptydir-5201" to be "success or failure"
+Dec 10 11:18:28.945: INFO: Pod "pod-c9ebacf5-bd2e-401f-8fb9-060d9af1ff3b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.604261ms
+Dec 10 11:18:30.948: INFO: Pod "pod-c9ebacf5-bd2e-401f-8fb9-060d9af1ff3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009180792s
+STEP: Saw pod success
+Dec 10 11:18:30.948: INFO: Pod "pod-c9ebacf5-bd2e-401f-8fb9-060d9af1ff3b" satisfied condition "success or failure"
+Dec 10 11:18:30.951: INFO: Trying to get logs from node dce82 pod pod-c9ebacf5-bd2e-401f-8fb9-060d9af1ff3b container test-container: 
+STEP: delete the pod
+Dec 10 11:18:30.967: INFO: Waiting for pod pod-c9ebacf5-bd2e-401f-8fb9-060d9af1ff3b to disappear
+Dec 10 11:18:30.970: INFO: Pod pod-c9ebacf5-bd2e-401f-8fb9-060d9af1ff3b no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:18:30.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-5201" for this suite.
+Dec 10 11:18:36.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:18:37.050: INFO: namespace emptydir-5201 deletion completed in 6.075537239s
+
+• [SLOW TEST:8.262 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSS
+------------------------------
+[sig-network] Services 
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:18:37.050: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename services
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2855
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
+[It] should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating service multi-endpoint-test in namespace services-2855
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2855 to expose endpoints map[]
+Dec 10 11:18:37.207: INFO: Get endpoints failed (3.007918ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
+Dec 10 11:18:38.210: INFO: successfully validated that service multi-endpoint-test in namespace services-2855 exposes endpoints map[] (1.006101114s elapsed)
+STEP: Creating pod pod1 in namespace services-2855
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2855 to expose endpoints map[pod1:[100]]
+Dec 10 11:18:40.234: INFO: successfully validated that service multi-endpoint-test in namespace services-2855 exposes endpoints map[pod1:[100]] (2.020108776s elapsed)
+STEP: Creating pod pod2 in namespace services-2855
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2855 to expose endpoints map[pod1:[100] pod2:[101]]
+Dec 10 11:18:43.276: INFO: successfully validated that service multi-endpoint-test in namespace services-2855 exposes endpoints map[pod1:[100] pod2:[101]] (3.036943967s elapsed)
+STEP: Deleting pod pod1 in namespace services-2855
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2855 to expose endpoints map[pod2:[101]]
+Dec 10 11:18:43.287: INFO: successfully validated that service multi-endpoint-test in namespace services-2855 exposes endpoints map[pod2:[101]] (6.259313ms elapsed)
+STEP: Deleting pod pod2 in namespace services-2855
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2855 to expose endpoints map[]
+Dec 10 11:18:44.296: INFO: successfully validated that service multi-endpoint-test in namespace services-2855 exposes endpoints map[] (1.004992975s elapsed)
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:18:44.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-2855" for this suite.
+Dec 10 11:19:06.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:19:06.398: INFO: namespace services-2855 deletion completed in 22.087786621s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
+
+• [SLOW TEST:29.348 seconds]
+[sig-network] Services
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Proxy server 
+  should support --unix-socket=/path  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:19:06.399: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-717
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[It] should support --unix-socket=/path  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Starting the proxy
+Dec 10 11:19:06.544: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-845205613 proxy --unix-socket=/tmp/kubectl-proxy-unix478471912/test'
+STEP: retrieving proxy /api/ output
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:19:06.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-717" for this suite.
+Dec 10 11:19:12.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:19:12.690: INFO: namespace kubectl-717 deletion completed in 6.085170052s
+
+• [SLOW TEST:6.291 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Proxy server
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should support --unix-socket=/path  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+S
+------------------------------
+[sig-network] DNS 
+  should provide DNS for ExternalName services [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-network] DNS
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:19:12.690: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename dns
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-2508
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for ExternalName services [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a test externalName service
+STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2508.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local; sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2508.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local; sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Dec 10 11:19:16.869: INFO: DNS probes using dns-test-547f4e57-a19b-47b1-8f29-4019fc59209b succeeded
+
+STEP: deleting the pod
+STEP: changing the externalName to bar.example.com
+STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2508.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local; sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2508.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local; sleep 1; done
+
+STEP: creating a second pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Dec 10 11:19:20.902: INFO: File wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local from pod  dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 contains 'foo.example.com.
+' instead of 'bar.example.com.'
+Dec 10 11:19:20.904: INFO: File jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local from pod  dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 contains '' instead of 'bar.example.com.'
+Dec 10 11:19:20.904: INFO: Lookups using dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 failed for: [wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local]
+
+Dec 10 11:19:25.907: INFO: File wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local from pod  dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 contains 'foo.example.com.
+' instead of 'bar.example.com.'
+Dec 10 11:19:25.910: INFO: File jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local from pod  dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 contains 'foo.example.com.
+' instead of 'bar.example.com.'
+Dec 10 11:19:25.910: INFO: Lookups using dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 failed for: [wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local]
+
+Dec 10 11:19:30.908: INFO: File wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local from pod  dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 contains 'foo.example.com.
+' instead of 'bar.example.com.'
+Dec 10 11:19:30.912: INFO: File jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local from pod  dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 contains 'foo.example.com.
+' instead of 'bar.example.com.'
+Dec 10 11:19:30.912: INFO: Lookups using dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 failed for: [wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local]
+
+Dec 10 11:19:35.908: INFO: File wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local from pod  dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 contains 'foo.example.com.
+' instead of 'bar.example.com.'
+Dec 10 11:19:35.911: INFO: File jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local from pod  dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 contains 'foo.example.com.
+' instead of 'bar.example.com.'
+Dec 10 11:19:35.911: INFO: Lookups using dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 failed for: [wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local]
+
+Dec 10 11:19:40.907: INFO: File wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local from pod  dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 contains 'foo.example.com.
+' instead of 'bar.example.com.'
+Dec 10 11:19:40.922: INFO: File jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local from pod  dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 contains 'foo.example.com.
+' instead of 'bar.example.com.'
+Dec 10 11:19:40.922: INFO: Lookups using dns-2508/dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 failed for: [wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local]
+
+Dec 10 11:19:45.910: INFO: DNS probes using dns-test-79319b62-30fb-41e2-92c5-43c6764821c5 succeeded
+
+STEP: deleting the pod
+STEP: changing the service to type=ClusterIP
+STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2508.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2508.svc.cluster.local; sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2508.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local; sleep 1; done
+
+STEP: creating a third pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Dec 10 11:19:49.952: INFO: File jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local from pod  dns-2508/dns-test-abe06173-9794-41c9-a503-17ce2d201aad contains '' instead of '10.96.3.212'
+Dec 10 11:19:49.952: INFO: Lookups using dns-2508/dns-test-abe06173-9794-41c9-a503-17ce2d201aad failed for: [jessie_udp@dns-test-service-3.dns-2508.svc.cluster.local]
+
+Dec 10 11:19:54.962: INFO: DNS probes using dns-test-abe06173-9794-41c9-a503-17ce2d201aad succeeded
+
+STEP: deleting the pod
+STEP: deleting the test externalName service
+[AfterEach] [sig-network] DNS
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:19:54.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-2508" for this suite.
+Dec 10 11:20:00.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:20:01.073: INFO: namespace dns-2508 deletion completed in 6.087505465s
+
+• [SLOW TEST:48.383 seconds]
+[sig-network] DNS
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide DNS for ExternalName services [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:20:01.074: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8367
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 11:20:01.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87cf4ed6-8882-4384-b18b-4e3d8e590001" in namespace "downward-api-8367" to be "success or failure"
+Dec 10 11:20:01.223: INFO: Pod "downwardapi-volume-87cf4ed6-8882-4384-b18b-4e3d8e590001": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368754ms
+Dec 10 11:20:03.227: INFO: Pod "downwardapi-volume-87cf4ed6-8882-4384-b18b-4e3d8e590001": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007032372s
+STEP: Saw pod success
+Dec 10 11:20:03.227: INFO: Pod "downwardapi-volume-87cf4ed6-8882-4384-b18b-4e3d8e590001" satisfied condition "success or failure"
+Dec 10 11:20:03.231: INFO: Trying to get logs from node dce82 pod downwardapi-volume-87cf4ed6-8882-4384-b18b-4e3d8e590001 container client-container: 
+STEP: delete the pod
+Dec 10 11:20:03.249: INFO: Waiting for pod downwardapi-volume-87cf4ed6-8882-4384-b18b-4e3d8e590001 to disappear
+Dec 10 11:20:03.254: INFO: Pod downwardapi-volume-87cf4ed6-8882-4384-b18b-4e3d8e590001 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:20:03.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-8367" for this suite.
+Dec 10 11:20:09.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:20:09.332: INFO: namespace downward-api-8367 deletion completed in 6.074195045s
+
+• [SLOW TEST:8.258 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
+  should be submitted and removed [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:20:09.333: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename pods
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-229
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Delete Grace Period
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
+[It] should be submitted and removed [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating the pod
+STEP: setting up selector
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+Dec 10 11:20:11.498: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-845205613 proxy -p 0'
+STEP: deleting the pod gracefully
+STEP: verifying the kubelet observed the termination notice
+Dec 10 11:20:26.583: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
+[AfterEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:20:26.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-229" for this suite.
+Dec 10 11:20:32.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:20:32.660: INFO: namespace pods-229 deletion completed in 6.071472821s
+
+• [SLOW TEST:23.327 seconds]
+[k8s.io] [sig-node] Pods Extended
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  [k8s.io] Delete Grace Period
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should be submitted and removed [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSS
+------------------------------
+[sig-api-machinery] Aggregator 
+  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Aggregator
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:20:32.661: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename aggregator
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in aggregator-5738
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] Aggregator
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
+Dec 10 11:20:32.868: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Registering the sample API server.
+Dec 10 11:20:33.391: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
+Dec 10 11:20:35.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711573633, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711573633, loc:(*time.Location)(0x7ec7a20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63711573633, loc:(*time.Location)(0x7ec7a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63711573633, loc:(*time.Location)(0x7ec7a20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Dec 10 11:20:46.657: INFO: Waited 9.23064213s for the sample-apiserver to be ready to handle requests.
+[AfterEach] [sig-api-machinery] Aggregator
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
+[AfterEach] [sig-api-machinery] Aggregator
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:20:47.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "aggregator-5738" for this suite.
+Dec 10 11:20:53.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:20:53.402: INFO: namespace aggregator-5738 deletion completed in 6.17963938s
+
+• [SLOW TEST:20.742 seconds]
+[sig-api-machinery] Aggregator
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1 
+  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] version v1
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:20:53.402: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename proxy
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-8088
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 11:20:53.551: INFO: (0) /api/v1/nodes/dce81:10250/proxy/logs/: 
+anaconda/
+audit/
+boot.log
+[... the same node log directory listing (anaconda/, audit/, boot.log) repeated for the remaining 19 proxy attempts; the teardown and [SLOW TEST] summary for this test were truncated here ...]
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Update Demo 
+  should do a rolling update of a replication controller  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+>>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7029
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[BeforeEach] [k8s.io] Update Demo
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
+[It] should do a rolling update of a replication controller  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating the initial replication controller
+Dec 10 11:20:59.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 create -f - --namespace=kubectl-7029'
+Dec 10 11:21:00.066: INFO: stderr: ""
+Dec 10 11:21:00.066: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Dec 10 11:21:00.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7029'
+Dec 10 11:21:00.146: INFO: stderr: ""
+Dec 10 11:21:00.146: INFO: stdout: "update-demo-nautilus-lpqd8 update-demo-nautilus-zlr9x "
+Dec 10 11:21:00.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-lpqd8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7029'
+Dec 10 11:21:00.220: INFO: stderr: ""
+Dec 10 11:21:00.220: INFO: stdout: ""
+Dec 10 11:21:00.220: INFO: update-demo-nautilus-lpqd8 is created but not running
+Dec 10 11:21:05.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7029'
+Dec 10 11:21:05.317: INFO: stderr: ""
+Dec 10 11:21:05.317: INFO: stdout: "update-demo-nautilus-lpqd8 update-demo-nautilus-zlr9x "
+Dec 10 11:21:05.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-lpqd8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7029'
+Dec 10 11:21:05.405: INFO: stderr: ""
+Dec 10 11:21:05.405: INFO: stdout: "true"
+Dec 10 11:21:05.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-lpqd8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7029'
+Dec 10 11:21:05.484: INFO: stderr: ""
+Dec 10 11:21:05.484: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Dec 10 11:21:05.484: INFO: validating pod update-demo-nautilus-lpqd8
+Dec 10 11:21:05.489: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Dec 10 11:21:05.489: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Dec 10 11:21:05.489: INFO: update-demo-nautilus-lpqd8 is verified up and running
+Dec 10 11:21:05.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-zlr9x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7029'
+Dec 10 11:21:05.580: INFO: stderr: ""
+Dec 10 11:21:05.580: INFO: stdout: "true"
+Dec 10 11:21:05.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-nautilus-zlr9x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7029'
+Dec 10 11:21:05.653: INFO: stderr: ""
+Dec 10 11:21:05.653: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Dec 10 11:21:05.653: INFO: validating pod update-demo-nautilus-zlr9x
+Dec 10 11:21:05.658: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Dec 10 11:21:05.658: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Dec 10 11:21:05.658: INFO: update-demo-nautilus-zlr9x is verified up and running
+STEP: rolling-update to new replication controller
+Dec 10 11:21:05.659: INFO: scanned /root for discovery docs: 
+Dec 10 11:21:05.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7029'
+Dec 10 11:21:28.002: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
+Dec 10 11:21:28.002: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Dec 10 11:21:28.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7029'
+Dec 10 11:21:28.097: INFO: stderr: ""
+Dec 10 11:21:28.097: INFO: stdout: "update-demo-kitten-lkb5p update-demo-kitten-sc2kd "
+Dec 10 11:21:28.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-kitten-lkb5p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7029'
+Dec 10 11:21:28.173: INFO: stderr: ""
+Dec 10 11:21:28.173: INFO: stdout: "true"
+Dec 10 11:21:28.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-kitten-lkb5p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7029'
+Dec 10 11:21:28.264: INFO: stderr: ""
+Dec 10 11:21:28.264: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
+Dec 10 11:21:28.264: INFO: validating pod update-demo-kitten-lkb5p
+Dec 10 11:21:28.268: INFO: got data: {
+  "image": "kitten.jpg"
+}
+
+Dec 10 11:21:28.268: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
+Dec 10 11:21:28.268: INFO: update-demo-kitten-lkb5p is verified up and running
+Dec 10 11:21:28.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-kitten-sc2kd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7029'
+Dec 10 11:21:28.349: INFO: stderr: ""
+Dec 10 11:21:28.349: INFO: stdout: "true"
+Dec 10 11:21:28.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 get pods update-demo-kitten-sc2kd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7029'
+Dec 10 11:21:28.422: INFO: stderr: ""
+Dec 10 11:21:28.422: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
+Dec 10 11:21:28.422: INFO: validating pod update-demo-kitten-sc2kd
+Dec 10 11:21:28.428: INFO: got data: {
+  "image": "kitten.jpg"
+}
+
+Dec 10 11:21:28.428: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
+Dec 10 11:21:28.428: INFO: update-demo-kitten-sc2kd is verified up and running
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:21:28.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-7029" for this suite.
+Dec 10 11:21:50.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:21:50.518: INFO: namespace kubectl-7029 deletion completed in 22.085787415s
+
+• [SLOW TEST:50.821 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Update Demo
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should do a rolling update of a replication controller  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  binary data should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:21:50.519: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename configmap
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4358
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] binary data should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name configmap-test-upd-c761621f-2e38-40e3-ad9a-f4950c41c596
+STEP: Creating the pod
+STEP: Waiting for pod with text data
+STEP: Waiting for pod with binary data
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:21:52.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-4358" for this suite.
+Dec 10 11:22:14.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:22:14.794: INFO: namespace configmap-4358 deletion completed in 22.090669341s
+
+• [SLOW TEST:24.276 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
+  binary data should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:22:14.795: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename daemonsets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-3688
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
+[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 11:22:14.955: INFO: Creating simple daemon set daemon-set
+STEP: Check that daemon pods launch on every node of the cluster.
+Dec 10 11:22:14.997: INFO: Number of nodes with available pods: 0
+Dec 10 11:22:14.997: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:22:16.002: INFO: Number of nodes with available pods: 0
+Dec 10 11:22:16.002: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:22:17.003: INFO: Number of nodes with available pods: 2
+Dec 10 11:22:17.003: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:22:18.003: INFO: Number of nodes with available pods: 3
+Dec 10 11:22:18.003: INFO: Number of running nodes: 3, number of available pods: 3
+STEP: Update daemon pods image.
+STEP: Check that daemon pods images are updated.
+Dec 10 11:22:18.018: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:18.018: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:18.018: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:19.024: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:19.025: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:19.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:20.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:20.025: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:20.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:21.024: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:21.024: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:21.024: INFO: Pod daemon-set-jtl75 is not available
+Dec 10 11:22:21.024: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:22.024: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:22.024: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:22.024: INFO: Pod daemon-set-jtl75 is not available
+Dec 10 11:22:22.024: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:23.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:23.025: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:23.025: INFO: Pod daemon-set-jtl75 is not available
+Dec 10 11:22:23.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:24.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:24.025: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:24.025: INFO: Pod daemon-set-jtl75 is not available
+Dec 10 11:22:24.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:25.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:25.025: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:25.025: INFO: Pod daemon-set-jtl75 is not available
+Dec 10 11:22:25.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:26.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:26.025: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:26.025: INFO: Pod daemon-set-jtl75 is not available
+Dec 10 11:22:26.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:27.024: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:27.025: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:27.025: INFO: Pod daemon-set-jtl75 is not available
+Dec 10 11:22:27.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:28.024: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:28.024: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:28.024: INFO: Pod daemon-set-jtl75 is not available
+Dec 10 11:22:28.024: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:29.024: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:29.025: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:29.025: INFO: Pod daemon-set-jtl75 is not available
+Dec 10 11:22:29.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:30.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:30.025: INFO: Wrong image for pod: daemon-set-jtl75. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:30.025: INFO: Pod daemon-set-jtl75 is not available
+Dec 10 11:22:30.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:31.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:31.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:31.025: INFO: Pod daemon-set-pk2n4 is not available
+Dec 10 11:22:32.024: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:32.024: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:32.024: INFO: Pod daemon-set-pk2n4 is not available
+Dec 10 11:22:33.033: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:33.033: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:34.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:34.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:35.024: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:35.024: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:35.024: INFO: Pod daemon-set-lrr69 is not available
+Dec 10 11:22:36.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:36.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:36.025: INFO: Pod daemon-set-lrr69 is not available
+Dec 10 11:22:37.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:37.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:37.025: INFO: Pod daemon-set-lrr69 is not available
+Dec 10 11:22:38.023: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:38.023: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:38.023: INFO: Pod daemon-set-lrr69 is not available
+Dec 10 11:22:39.026: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:39.026: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:39.026: INFO: Pod daemon-set-lrr69 is not available
+Dec 10 11:22:40.024: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:40.024: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:40.024: INFO: Pod daemon-set-lrr69 is not available
+Dec 10 11:22:41.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:41.025: INFO: Wrong image for pod: daemon-set-lrr69. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:41.025: INFO: Pod daemon-set-lrr69 is not available
+Dec 10 11:22:42.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:42.025: INFO: Pod daemon-set-9kxn9 is not available
+Dec 10 11:22:43.025: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:43.025: INFO: Pod daemon-set-9kxn9 is not available
+Dec 10 11:22:44.026: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:44.026: INFO: Pod daemon-set-9kxn9 is not available
+Dec 10 11:22:45.024: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:46.026: INFO: Wrong image for pod: daemon-set-275nh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
+Dec 10 11:22:46.026: INFO: Pod daemon-set-275nh is not available
+Dec 10 11:22:47.025: INFO: Pod daemon-set-z2xxb is not available
+STEP: Check that daemon pods are still running on every node of the cluster.
+Dec 10 11:22:47.039: INFO: Number of nodes with available pods: 2
+Dec 10 11:22:47.039: INFO: Node dce83 is running more than one daemon pod
+Dec 10 11:22:48.047: INFO: Number of nodes with available pods: 2
+Dec 10 11:22:48.047: INFO: Node dce83 is running more than one daemon pod
+Dec 10 11:22:49.048: INFO: Number of nodes with available pods: 3
+Dec 10 11:22:49.048: INFO: Number of running nodes: 3, number of available pods: 3
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3688, will wait for the garbage collector to delete the pods
+Dec 10 11:22:49.123: INFO: Deleting DaemonSet.extensions daemon-set took: 7.801203ms
+Dec 10 11:22:49.524: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.280396ms
+Dec 10 11:23:01.927: INFO: Number of nodes with available pods: 0
+Dec 10 11:23:01.927: INFO: Number of running nodes: 0, number of available pods: 0
+Dec 10 11:23:01.930: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3688/daemonsets","resourceVersion":"381013"},"items":null}
+
+Dec 10 11:23:01.933: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3688/pods","resourceVersion":"381013"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:23:01.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-3688" for this suite.
+Dec 10 11:23:07.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:23:08.026: INFO: namespace daemonsets-3688 deletion completed in 6.076161547s
+
+• [SLOW TEST:53.232 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with downward pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:23:08.029: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename subpath
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-1161
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
+STEP: Setting up data
+[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating pod pod-subpath-test-downwardapi-g9wg
+STEP: Creating a pod to test atomic-volume-subpath
+Dec 10 11:23:08.189: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-g9wg" in namespace "subpath-1161" to be "success or failure"
+Dec 10 11:23:08.192: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.226304ms
+Dec 10 11:23:10.196: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Running", Reason="", readiness=true. Elapsed: 2.006873453s
+Dec 10 11:23:12.199: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Running", Reason="", readiness=true. Elapsed: 4.009914387s
+Dec 10 11:23:14.202: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Running", Reason="", readiness=true. Elapsed: 6.012644911s
+Dec 10 11:23:16.205: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Running", Reason="", readiness=true. Elapsed: 8.015838488s
+Dec 10 11:23:18.209: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Running", Reason="", readiness=true. Elapsed: 10.019617343s
+Dec 10 11:23:20.212: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Running", Reason="", readiness=true. Elapsed: 12.023137575s
+Dec 10 11:23:22.216: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Running", Reason="", readiness=true. Elapsed: 14.026775001s
+Dec 10 11:23:24.220: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Running", Reason="", readiness=true. Elapsed: 16.031264553s
+Dec 10 11:23:26.224: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Running", Reason="", readiness=true. Elapsed: 18.035295021s
+Dec 10 11:23:28.228: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Running", Reason="", readiness=true. Elapsed: 20.039515006s
+Dec 10 11:23:30.233: INFO: Pod "pod-subpath-test-downwardapi-g9wg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.043882998s
+STEP: Saw pod success
+Dec 10 11:23:30.233: INFO: Pod "pod-subpath-test-downwardapi-g9wg" satisfied condition "success or failure"
+Dec 10 11:23:30.236: INFO: Trying to get logs from node dce82 pod pod-subpath-test-downwardapi-g9wg container test-container-subpath-downwardapi-g9wg: 
+STEP: delete the pod
+Dec 10 11:23:30.252: INFO: Waiting for pod pod-subpath-test-downwardapi-g9wg to disappear
+Dec 10 11:23:30.255: INFO: Pod pod-subpath-test-downwardapi-g9wg no longer exists
+STEP: Deleting pod pod-subpath-test-downwardapi-g9wg
+Dec 10 11:23:30.255: INFO: Deleting pod "pod-subpath-test-downwardapi-g9wg" in namespace "subpath-1161"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:23:30.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-1161" for this suite.
+Dec 10 11:23:36.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:23:36.340: INFO: namespace subpath-1161 deletion completed in 6.077036593s
+
+• [SLOW TEST:28.312 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  Atomic writer volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
+    should support subpaths with downward pod [LinuxOnly] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Secrets 
+  should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:23:36.341: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename secrets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-360
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating secret with name secret-test-cd6de093-9ab0-412f-b7d0-34df7e65cb4d
+STEP: Creating a pod to test consume secrets
+Dec 10 11:23:36.489: INFO: Waiting up to 5m0s for pod "pod-secrets-b99b4f1c-64b9-46ac-bdde-88d97f3dc12b" in namespace "secrets-360" to be "success or failure"
+Dec 10 11:23:36.491: INFO: Pod "pod-secrets-b99b4f1c-64b9-46ac-bdde-88d97f3dc12b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.984649ms
+Dec 10 11:23:38.493: INFO: Pod "pod-secrets-b99b4f1c-64b9-46ac-bdde-88d97f3dc12b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004211505s
+STEP: Saw pod success
+Dec 10 11:23:38.493: INFO: Pod "pod-secrets-b99b4f1c-64b9-46ac-bdde-88d97f3dc12b" satisfied condition "success or failure"
+Dec 10 11:23:38.495: INFO: Trying to get logs from node dce82 pod pod-secrets-b99b4f1c-64b9-46ac-bdde-88d97f3dc12b container secret-env-test: 
+STEP: delete the pod
+Dec 10 11:23:38.508: INFO: Waiting for pod pod-secrets-b99b4f1c-64b9-46ac-bdde-88d97f3dc12b to disappear
+Dec 10 11:23:38.511: INFO: Pod pod-secrets-b99b4f1c-64b9-46ac-bdde-88d97f3dc12b no longer exists
+[AfterEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:23:38.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-360" for this suite.
+Dec 10 11:23:44.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:23:44.599: INFO: namespace secrets-360 deletion completed in 6.084092037s
+
+• [SLOW TEST:8.258 seconds]
+[sig-api-machinery] Secrets
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
+  should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:23:44.599: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9523
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name projected-configmap-test-volume-map-a709fea9-5708-4ffe-a026-4707cca64ee4
+STEP: Creating a pod to test consume configMaps
+Dec 10 11:23:44.751: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e79829f0-bb28-41cf-a536-b681fc02aac5" in namespace "projected-9523" to be "success or failure"
+Dec 10 11:23:44.754: INFO: Pod "pod-projected-configmaps-e79829f0-bb28-41cf-a536-b681fc02aac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.685823ms
+Dec 10 11:23:46.757: INFO: Pod "pod-projected-configmaps-e79829f0-bb28-41cf-a536-b681fc02aac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005952987s
+STEP: Saw pod success
+Dec 10 11:23:46.757: INFO: Pod "pod-projected-configmaps-e79829f0-bb28-41cf-a536-b681fc02aac5" satisfied condition "success or failure"
+Dec 10 11:23:46.760: INFO: Trying to get logs from node dce82 pod pod-projected-configmaps-e79829f0-bb28-41cf-a536-b681fc02aac5 container projected-configmap-volume-test: 
+STEP: delete the pod
+Dec 10 11:23:46.779: INFO: Waiting for pod pod-projected-configmaps-e79829f0-bb28-41cf-a536-b681fc02aac5 to disappear
+Dec 10 11:23:46.782: INFO: Pod pod-projected-configmaps-e79829f0-bb28-41cf-a536-b681fc02aac5 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:23:46.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-9523" for this suite.
+Dec 10 11:23:52.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:23:52.878: INFO: namespace projected-9523 deletion completed in 6.092846714s
+
+• [SLOW TEST:8.279 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
+  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
+  should have an terminated reason [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:23:52.879: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-690
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[BeforeEach] when scheduling a busybox command that always fails in a pod
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
+[It] should have an terminated reason [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:23:57.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-690" for this suite.
+Dec 10 11:24:03.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:24:03.142: INFO: namespace kubelet-test-690 deletion completed in 6.088248534s
+
+• [SLOW TEST:10.264 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  when scheduling a busybox command that always fails in a pod
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
+    should have an terminated reason [NodeConformance] [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:24:03.143: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename configmap
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2793
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name configmap-test-volume-94a02019-c7b1-422c-a445-d4f92762e898
+STEP: Creating a pod to test consume configMaps
+Dec 10 11:24:03.290: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f164e34-5f0f-4ab3-83bc-cab68a6f66f3" in namespace "configmap-2793" to be "success or failure"
+Dec 10 11:24:03.292: INFO: Pod "pod-configmaps-8f164e34-5f0f-4ab3-83bc-cab68a6f66f3": Phase="Pending", Reason="", readiness=false. Elapsed: 1.744796ms
+Dec 10 11:24:05.294: INFO: Pod "pod-configmaps-8f164e34-5f0f-4ab3-83bc-cab68a6f66f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004028604s
+STEP: Saw pod success
+Dec 10 11:24:05.294: INFO: Pod "pod-configmaps-8f164e34-5f0f-4ab3-83bc-cab68a6f66f3" satisfied condition "success or failure"
+Dec 10 11:24:05.295: INFO: Trying to get logs from node dce82 pod pod-configmaps-8f164e34-5f0f-4ab3-83bc-cab68a6f66f3 container configmap-volume-test: 
+STEP: delete the pod
+Dec 10 11:24:05.306: INFO: Waiting for pod pod-configmaps-8f164e34-5f0f-4ab3-83bc-cab68a6f66f3 to disappear
+Dec 10 11:24:05.308: INFO: Pod pod-configmaps-8f164e34-5f0f-4ab3-83bc-cab68a6f66f3 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:24:05.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-2793" for this suite.
+Dec 10 11:24:11.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:24:11.399: INFO: namespace configmap-2793 deletion completed in 6.088177979s
+
+• [SLOW TEST:8.256 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
+  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:24:11.399: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename emptydir
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4585
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test emptydir 0644 on node default medium
+Dec 10 11:24:11.547: INFO: Waiting up to 5m0s for pod "pod-248ac5b0-a983-4f1b-a34a-cb43b6feee2b" in namespace "emptydir-4585" to be "success or failure"
+Dec 10 11:24:11.551: INFO: Pod "pod-248ac5b0-a983-4f1b-a34a-cb43b6feee2b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.462114ms
+Dec 10 11:24:13.555: INFO: Pod "pod-248ac5b0-a983-4f1b-a34a-cb43b6feee2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007653838s
+STEP: Saw pod success
+Dec 10 11:24:13.555: INFO: Pod "pod-248ac5b0-a983-4f1b-a34a-cb43b6feee2b" satisfied condition "success or failure"
+Dec 10 11:24:13.558: INFO: Trying to get logs from node dce82 pod pod-248ac5b0-a983-4f1b-a34a-cb43b6feee2b container test-container: 
+STEP: delete the pod
+Dec 10 11:24:13.580: INFO: Waiting for pod pod-248ac5b0-a983-4f1b-a34a-cb43b6feee2b to disappear
+Dec 10 11:24:13.582: INFO: Pod pod-248ac5b0-a983-4f1b-a34a-cb43b6feee2b no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:24:13.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-4585" for this suite.
+Dec 10 11:24:19.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:24:19.668: INFO: namespace emptydir-4585 deletion completed in 6.081825563s
+
+• [SLOW TEST:8.269 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates that NodeSelector is respected if matching  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:24:19.669: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename sched-pred
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-2142
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
+Dec 10 11:24:19.812: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Dec 10 11:24:19.820: INFO: Waiting for terminating namespaces to be deleted...
+Dec 10 11:24:19.823: INFO: 
+Logging pods the kubelet thinks is on node dce81 before test
+Dec 10 11:24:19.834: INFO: dce-prometheus-698b884db7-5vrk2 from kube-system started at 2019-12-09 03:02:00 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.834: INFO: 	Container dce-prometheus ready: true, restart count 0
+Dec 10 11:24:19.834: INFO: smokeping-drpdh from kube-system started at 2019-12-08 10:38:00 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.834: INFO: 	Container smokeping ready: true, restart count 1
+Dec 10 11:24:19.834: INFO: calico-node-zj8bt from kube-system started at 2019-12-08 10:37:37 +0000 UTC (2 container statuses recorded)
+Dec 10 11:24:19.834: INFO: 	Container calico-node ready: true, restart count 2
+Dec 10 11:24:19.834: INFO: 	Container install-cni ready: true, restart count 2
+Dec 10 11:24:19.834: INFO: kube-proxy-lc4c7 from kube-system started at 2019-12-08 10:37:38 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.834: INFO: 	Container kube-proxy ready: true, restart count 2
+Dec 10 11:24:19.834: INFO: sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-dhl7h from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 11:24:19.834: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Dec 10 11:24:19.834: INFO: 	Container systemd-logs ready: true, restart count 0
+Dec 10 11:24:19.834: INFO: dce-cloud-provider-manager-rtcrj from kube-system started at 2019-12-08 10:37:37 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.834: INFO: 	Container dce-cloud-provider ready: true, restart count 2
+Dec 10 11:24:19.834: INFO: dce-chart-manager-797958bcff-v2wfh from kube-system started at 2019-12-08 10:38:00 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.834: INFO: 	Container chart-manager ready: true, restart count 1
+Dec 10 11:24:19.834: INFO: calico-kube-controllers-6b7d5ffdd4-x65qw from kube-system started at 2019-12-08 10:38:02 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.834: INFO: 	Container calico-kube-controllers ready: true, restart count 2
+Dec 10 11:24:19.834: INFO: node-local-dns-cv2r5 from kube-system started at 2019-12-10 09:33:54 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.834: INFO: 	Container node-cache ready: true, restart count 0
+Dec 10 11:24:19.834: INFO: 
+Logging pods the kubelet thinks is on node dce82 before test
+Dec 10 11:24:19.845: INFO: sonobuoy-e2e-job-3fef55150259473e from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 11:24:19.845: INFO: 	Container e2e ready: true, restart count 0
+Dec 10 11:24:19.845: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Dec 10 11:24:19.845: INFO: node-local-dns-jwvds from kube-system started at 2019-12-10 09:33:54 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.845: INFO: 	Container node-cache ready: true, restart count 0
+Dec 10 11:24:19.845: INFO: calico-node-6bfc2 from kube-system started at 2019-12-09 02:46:32 +0000 UTC (2 container statuses recorded)
+Dec 10 11:24:19.845: INFO: 	Container calico-node ready: true, restart count 1
+Dec 10 11:24:19.845: INFO: 	Container install-cni ready: true, restart count 1
+Dec 10 11:24:19.845: INFO: kube-proxy-gdkmh from kube-system started at 2019-12-09 02:46:32 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.845: INFO: 	Container kube-proxy ready: true, restart count 2
+Dec 10 11:24:19.845: INFO: sonobuoy from sonobuoy started at 2019-12-10 09:57:13 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.845: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Dec 10 11:24:19.845: INFO: coredns-56b78b5b9c-vvgnk from kube-system started at 2019-12-10 09:38:10 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.845: INFO: 	Container coredns ready: true, restart count 0
+Dec 10 11:24:19.845: INFO: smokeping-jw5wv from kube-system started at 2019-12-09 02:46:32 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.845: INFO: 	Container smokeping ready: true, restart count 2
+Dec 10 11:24:19.845: INFO: sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-vczr4 from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 11:24:19.845: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Dec 10 11:24:19.845: INFO: 	Container systemd-logs ready: true, restart count 0
+Dec 10 11:24:19.845: INFO: dce-system-dnsservice-868586b8dd-glqkf from dce-system started at 2019-12-10 09:28:56 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.845: INFO: 	Container dce-system-dnsservice ready: true, restart count 0
+Dec 10 11:24:19.845: INFO: 
+Logging pods the kubelet thinks is on node dce83 before test
+Dec 10 11:24:19.855: INFO: smokeping-xkkch from kube-system started at 2019-12-09 02:46:26 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.855: INFO: 	Container smokeping ready: true, restart count 5
+Dec 10 11:24:19.855: INFO: kube-proxy-g25r8 from kube-system started at 2019-12-09 02:46:26 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.855: INFO: 	Container kube-proxy ready: true, restart count 5
+Dec 10 11:24:19.855: INFO: coredns-coredns-7d54967c97-22wrr from kube-system started at 2019-12-09 02:58:50 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.855: INFO: 	Container coredns ready: true, restart count 5
+Dec 10 11:24:19.855: INFO: node-local-dns-mqqrp from kube-system started at 2019-12-10 09:33:54 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.855: INFO: 	Container node-cache ready: true, restart count 0
+Dec 10 11:24:19.855: INFO: calico-node-856tw from kube-system started at 2019-12-09 02:46:26 +0000 UTC (2 container statuses recorded)
+Dec 10 11:24:19.855: INFO: 	Container calico-node ready: true, restart count 3
+Dec 10 11:24:19.855: INFO: 	Container install-cni ready: true, restart count 3
+Dec 10 11:24:19.855: INFO: coredns-56b78b5b9c-629w2 from kube-system started at 2019-12-10 09:38:10 +0000 UTC (1 container statuses recorded)
+Dec 10 11:24:19.855: INFO: 	Container coredns ready: true, restart count 0
+Dec 10 11:24:19.855: INFO: sonobuoy-systemd-logs-daemon-set-ea02895db0f74bf1-9bdkz from sonobuoy started at 2019-12-10 09:57:15 +0000 UTC (2 container statuses recorded)
+Dec 10 11:24:19.855: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Dec 10 11:24:19.855: INFO: 	Container systemd-logs ready: true, restart count 0
+[It] validates that NodeSelector is respected if matching  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Trying to launch a pod without a label to get a node which can launch it.
+STEP: Explicitly delete pod here to free the resource it takes.
+STEP: Trying to apply a random label on the found node.
+STEP: verifying the node has the label kubernetes.io/e2e-72a63380-9d78-4ccd-aafc-8710cee37c39 42
+STEP: Trying to relaunch the pod, now with labels.
+STEP: removing the label kubernetes.io/e2e-72a63380-9d78-4ccd-aafc-8710cee37c39 off the node dce82
+STEP: verifying the node doesn't have the label kubernetes.io/e2e-72a63380-9d78-4ccd-aafc-8710cee37c39
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:24:23.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-2142" for this suite.
+Dec 10 11:24:51.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:24:51.999: INFO: namespace sched-pred-2142 deletion completed in 28.085410946s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
+
+• [SLOW TEST:32.330 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
+  validates that NodeSelector is respected if matching  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+S
+------------------------------
+[sig-network] Services 
+  should serve a basic endpoint from pods  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:24:51.999: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename services
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1032
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
+[It] should serve a basic endpoint from pods  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating service endpoint-test2 in namespace services-1032
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1032 to expose endpoints map[]
+Dec 10 11:24:52.151: INFO: Get endpoints failed (3.377199ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
+Dec 10 11:24:53.155: INFO: successfully validated that service endpoint-test2 in namespace services-1032 exposes endpoints map[] (1.007444385s elapsed)
+STEP: Creating pod pod1 in namespace services-1032
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1032 to expose endpoints map[pod1:[80]]
+Dec 10 11:24:56.187: INFO: successfully validated that service endpoint-test2 in namespace services-1032 exposes endpoints map[pod1:[80]] (3.027396469s elapsed)
+STEP: Creating pod pod2 in namespace services-1032
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1032 to expose endpoints map[pod1:[80] pod2:[80]]
+Dec 10 11:24:59.224: INFO: successfully validated that service endpoint-test2 in namespace services-1032 exposes endpoints map[pod1:[80] pod2:[80]] (3.031799336s elapsed)
+STEP: Deleting pod pod1 in namespace services-1032
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1032 to expose endpoints map[pod2:[80]]
+Dec 10 11:25:00.246: INFO: successfully validated that service endpoint-test2 in namespace services-1032 exposes endpoints map[pod2:[80]] (1.015722936s elapsed)
+STEP: Deleting pod pod2 in namespace services-1032
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1032 to expose endpoints map[]
+Dec 10 11:25:01.258: INFO: successfully validated that service endpoint-test2 in namespace services-1032 exposes endpoints map[] (1.006127698s elapsed)
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:25:01.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-1032" for this suite.
+Dec 10 11:25:23.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:25:23.349: INFO: namespace services-1032 deletion completed in 22.078424427s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
+
+• [SLOW TEST:31.350 seconds]
+[sig-network] Services
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should serve a basic endpoint from pods  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:25:23.349: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename watch
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-975
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating a watch on configmaps
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: closing the watch once it receives two notifications
+Dec 10 11:25:23.505: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-975,SelfLink:/api/v1/namespaces/watch-975/configmaps/e2e-watch-test-watch-closed,UID:17f9e795-ccd3-401d-b05e-3feb95898cda,ResourceVersion:381707,Generation:0,CreationTimestamp:2019-12-10 11:25:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
+Dec 10 11:25:23.505: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-975,SelfLink:/api/v1/namespaces/watch-975/configmaps/e2e-watch-test-watch-closed,UID:17f9e795-ccd3-401d-b05e-3feb95898cda,ResourceVersion:381708,Generation:0,CreationTimestamp:2019-12-10 11:25:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+STEP: modifying the configmap a second time, while the watch is closed
+STEP: creating a new watch on configmaps from the last resource version observed by the first watch
+STEP: deleting the configmap
+STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
+Dec 10 11:25:23.514: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-975,SelfLink:/api/v1/namespaces/watch-975/configmaps/e2e-watch-test-watch-closed,UID:17f9e795-ccd3-401d-b05e-3feb95898cda,ResourceVersion:381709,Generation:0,CreationTimestamp:2019-12-10 11:25:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Dec 10 11:25:23.514: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-975,SelfLink:/api/v1/namespaces/watch-975/configmaps/e2e-watch-test-watch-closed,UID:17f9e795-ccd3-401d-b05e-3feb95898cda,ResourceVersion:381710,Generation:0,CreationTimestamp:2019-12-10 11:25:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:25:23.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-975" for this suite.
+Dec 10 11:25:29.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:25:29.611: INFO: namespace watch-975 deletion completed in 6.093329432s
+
+• [SLOW TEST:6.262 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-node] Downward API 
+  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:25:29.611: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2969
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward api env vars
+Dec 10 11:25:29.763: INFO: Waiting up to 5m0s for pod "downward-api-1559ceb1-de33-4bff-8a15-a7ae34f3fec0" in namespace "downward-api-2969" to be "success or failure"
+Dec 10 11:25:29.766: INFO: Pod "downward-api-1559ceb1-de33-4bff-8a15-a7ae34f3fec0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.925767ms
+Dec 10 11:25:31.770: INFO: Pod "downward-api-1559ceb1-de33-4bff-8a15-a7ae34f3fec0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007284855s
+Dec 10 11:25:33.775: INFO: Pod "downward-api-1559ceb1-de33-4bff-8a15-a7ae34f3fec0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011712987s
+STEP: Saw pod success
+Dec 10 11:25:33.775: INFO: Pod "downward-api-1559ceb1-de33-4bff-8a15-a7ae34f3fec0" satisfied condition "success or failure"
+Dec 10 11:25:33.778: INFO: Trying to get logs from node dce82 pod downward-api-1559ceb1-de33-4bff-8a15-a7ae34f3fec0 container dapi-container: 
+STEP: delete the pod
+Dec 10 11:25:33.792: INFO: Waiting for pod downward-api-1559ceb1-de33-4bff-8a15-a7ae34f3fec0 to disappear
+Dec 10 11:25:33.794: INFO: Pod downward-api-1559ceb1-de33-4bff-8a15-a7ae34f3fec0 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:25:33.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-2969" for this suite.
+Dec 10 11:25:39.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:25:39.878: INFO: namespace downward-api-2969 deletion completed in 6.081050794s
+
+• [SLOW TEST:10.267 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
+  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should get a host IP [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:25:39.878: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename pods
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4557
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
+[It] should get a host IP [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating pod
+Dec 10 11:25:44.034: INFO: Pod pod-hostip-d63f96a0-f17a-400e-8a68-3699ba600c9f has hostIP: 10.6.135.82
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:25:44.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-4557" for this suite.
+Dec 10 11:26:06.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:26:06.122: INFO: namespace pods-4557 deletion completed in 22.083780826s
+
+• [SLOW TEST:26.244 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should get a host IP [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+S
+------------------------------
+[sig-storage] Downward API volume 
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:26:06.122: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4272
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating the pod
+Dec 10 11:26:08.860: INFO: Successfully updated pod "annotationupdatee2ff6a9e-66a2-4598-b921-0a8a673c0808"
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:26:12.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-4272" for this suite.
+Dec 10 11:26:34.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:26:34.977: INFO: namespace downward-api-4272 deletion completed in 22.085966418s
+
+• [SLOW TEST:28.855 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
+  creating/deleting custom resource definition objects works  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:26:34.978: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename custom-resource-definition
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-5286
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] creating/deleting custom resource definition objects works  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 11:26:35.116: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:26:36.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "custom-resource-definition-5286" for this suite.
+Dec 10 11:26:42.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:26:42.414: INFO: namespace custom-resource-definition-5286 deletion completed in 6.088796889s
+
+• [SLOW TEST:7.436 seconds]
+[sig-api-machinery] CustomResourceDefinition resources
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  Simple CustomResourceDefinition
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
+    creating/deleting custom resource definition objects works  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:26:42.415: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5759
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating projection with secret that has name projected-secret-test-a0a2517f-9f0e-49ee-8fe8-1f3b0e682919
+STEP: Creating a pod to test consume secrets
+Dec 10 11:26:42.564: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e0cd49f5-c5c6-48ef-a910-d4edad69de13" in namespace "projected-5759" to be "success or failure"
+Dec 10 11:26:42.567: INFO: Pod "pod-projected-secrets-e0cd49f5-c5c6-48ef-a910-d4edad69de13": Phase="Pending", Reason="", readiness=false. Elapsed: 3.150236ms
+Dec 10 11:26:44.571: INFO: Pod "pod-projected-secrets-e0cd49f5-c5c6-48ef-a910-d4edad69de13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007134878s
+STEP: Saw pod success
+Dec 10 11:26:44.571: INFO: Pod "pod-projected-secrets-e0cd49f5-c5c6-48ef-a910-d4edad69de13" satisfied condition "success or failure"
+Dec 10 11:26:44.575: INFO: Trying to get logs from node dce82 pod pod-projected-secrets-e0cd49f5-c5c6-48ef-a910-d4edad69de13 container projected-secret-volume-test: 
+STEP: delete the pod
+Dec 10 11:26:44.591: INFO: Waiting for pod pod-projected-secrets-e0cd49f5-c5c6-48ef-a910-d4edad69de13 to disappear
+Dec 10 11:26:44.592: INFO: Pod pod-projected-secrets-e0cd49f5-c5c6-48ef-a910-d4edad69de13 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:26:44.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-5759" for this suite.
+Dec 10 11:26:50.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:26:50.674: INFO: namespace projected-5759 deletion completed in 6.078953807s
+
+• [SLOW TEST:8.259 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
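+
+Here the pod consumes a secret through a projected volume source rather than a plain
+secret volume; the container simply cats the key back and the test matches the output.
+A hand-run sketch under the same assumptions (demo names are illustrative):
+```
+kubectl create secret generic demo-secret --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-secret-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: secret-volume-test
+    image: busybox
+    command: ["cat", "/etc/projected/data-1"]
+    volumeMounts:
+    - name: projected
+      mountPath: /etc/projected
+  volumes:
+  - name: projected
+    projected:
+      sources:
+      - secret:
+          name: demo-secret
+EOF
+kubectl logs projected-secret-demo   # expect: value-1
+```
+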
+SSSSSSSSS
+------------------------------
+[sig-node] Downward API 
+  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:26:50.674: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2586
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward api env vars
+Dec 10 11:26:50.815: INFO: Waiting up to 5m0s for pod "downward-api-c9139dd6-0094-434a-88c4-a3a2562cfbd8" in namespace "downward-api-2586" to be "success or failure"
+Dec 10 11:26:50.818: INFO: Pod "downward-api-c9139dd6-0094-434a-88c4-a3a2562cfbd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.517794ms
+Dec 10 11:26:52.821: INFO: Pod "downward-api-c9139dd6-0094-434a-88c4-a3a2562cfbd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006079117s
+STEP: Saw pod success
+Dec 10 11:26:52.821: INFO: Pod "downward-api-c9139dd6-0094-434a-88c4-a3a2562cfbd8" satisfied condition "success or failure"
+Dec 10 11:26:52.824: INFO: Trying to get logs from node dce82 pod downward-api-c9139dd6-0094-434a-88c4-a3a2562cfbd8 container dapi-container: 
+STEP: delete the pod
+Dec 10 11:26:52.843: INFO: Waiting for pod downward-api-c9139dd6-0094-434a-88c4-a3a2562cfbd8 to disappear
+Dec 10 11:26:52.845: INFO: Pod downward-api-c9139dd6-0094-434a-88c4-a3a2562cfbd8 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:26:52.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-2586" for this suite.
+Dec 10 11:26:58.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:26:58.943: INFO: namespace downward-api-2586 deletion completed in 6.093760039s
+
+• [SLOW TEST:8.269 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
+  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
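+
+This test exposes the container's own resource requests and limits as environment
+variables via resourceFieldRef. A minimal equivalent pod, with illustrative values:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dapi-env-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox
+    command: ["sh", "-c", "echo cpu_limit=$CPU_LIMIT mem_request=$MEMORY_REQUEST"]
+    resources:
+      requests: {cpu: 250m, memory: 32Mi}
+      limits:   {cpu: 500m, memory: 64Mi}
+    env:
+    - name: CPU_LIMIT
+      valueFrom:
+        resourceFieldRef:
+          resource: limits.cpu
+    - name: MEMORY_REQUEST
+      valueFrom:
+        resourceFieldRef:
+          resource: requests.memory
+EOF
+kubectl logs dapi-env-demo   # cpu_limit=1 mem_request=33554432 (cpu rounds up to whole cores by default)
+```
+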
+SSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
+  should check if v1 is in available api versions  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:26:58.943: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename kubectl
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-643
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
+[It] should check if v1 is in available api versions  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: validating api versions
+Dec 10 11:26:59.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 api-versions'
+Dec 10 11:26:59.184: INFO: stderr: ""
+Dec 10 11:26:59.184: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\nbatch/v2alpha1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndce.daocloud.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:26:59.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-643" for this suite.
+Dec 10 11:27:05.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:27:05.280: INFO: namespace kubectl-643 deletion completed in 6.092105085s
+
+• [SLOW TEST:6.337 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl api-versions
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    should check if v1 is in available api versions  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
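+
+The kubectl check above amounts to asserting that the core group appears in the
+discovery output, which the stdout dump confirms. The same assertion as a one-liner:
+```
+kubectl api-versions | grep -x v1 && echo "core v1 API is served"
+```
+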
+[sig-auth] ServiceAccounts 
+  should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:27:05.280: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename svcaccounts
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-6748
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: getting the auto-created API token
+STEP: reading a file in the container
+Dec 10 11:27:07.944: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6748 pod-service-account-a2c824cd-7236-4594-aecd-8b32929002af -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
+STEP: reading a file in the container
+Dec 10 11:27:08.140: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6748 pod-service-account-a2c824cd-7236-4594-aecd-8b32929002af -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
+STEP: reading a file in the container
+Dec 10 11:27:08.345: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6748 pod-service-account-a2c824cd-7236-4594-aecd-8b32929002af -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
+[AfterEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:27:08.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "svcaccounts-6748" for this suite.
+Dec 10 11:27:14.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:27:14.648: INFO: namespace svcaccounts-6748 deletion completed in 6.088758995s
+
+• [SLOW TEST:9.368 seconds]
+[sig-auth] ServiceAccounts
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
+  should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
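+
+The three exec calls above read the standard auto-mounted service-account files:
+token, ca.crt and namespace under /var/run/secrets/kubernetes.io/serviceaccount.
+A hand-run sketch, with an illustrative pod name:
+```
+kubectl run sa-demo --restart=Never --image=busybox --command -- sleep 3600
+kubectl wait --for=condition=Ready pod/sa-demo
+for f in token ca.crt namespace; do
+  kubectl exec sa-demo -- cat "/var/run/secrets/kubernetes.io/serviceaccount/$f" >/dev/null \
+    && echo "$f mounted"
+done
+```
+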
+SSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  Burst scaling should run to completion even with unhealthy pods [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:27:14.649: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename statefulset
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7568
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
+STEP: Creating service test in namespace statefulset-7568
+[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating stateful set ss in namespace statefulset-7568
+STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7568
+Dec 10 11:27:14.816: INFO: Found 0 stateful pods, waiting for 1
+Dec 10 11:27:24.819: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
+Dec 10 11:27:24.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-7568 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Dec 10 11:27:25.046: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Dec 10 11:27:25.046: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Dec 10 11:27:25.046: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Dec 10 11:27:25.050: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
+Dec 10 11:27:35.053: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Dec 10 11:27:35.053: INFO: Waiting for statefulset status.replicas updated to 0
+Dec 10 11:27:35.067: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
+Dec 10 11:27:35.067: INFO: ss-0  dce82  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  }]
+Dec 10 11:27:35.067: INFO: 
+Dec 10 11:27:35.067: INFO: StatefulSet ss has not reached scale 3, at 1
+Dec 10 11:27:36.072: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995828855s
+Dec 10 11:27:37.094: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991112585s
+Dec 10 11:27:38.098: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969084514s
+Dec 10 11:27:39.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.96519834s
+Dec 10 11:27:40.106: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.961437604s
+Dec 10 11:27:41.109: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.956732336s
+Dec 10 11:27:42.114: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.953375578s
+Dec 10 11:27:43.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.948893704s
+Dec 10 11:27:44.121: INFO: Verifying statefulset ss doesn't scale past 3 for another 945.254893ms
+STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7568
+Dec 10 11:27:45.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-7568 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Dec 10 11:27:45.345: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
+Dec 10 11:27:45.345: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Dec 10 11:27:45.345: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Dec 10 11:27:45.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-7568 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Dec 10 11:27:45.563: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
+Dec 10 11:27:45.563: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Dec 10 11:27:45.563: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Dec 10 11:27:45.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-7568 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Dec 10 11:27:45.788: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
+Dec 10 11:27:45.788: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Dec 10 11:27:45.788: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Dec 10 11:27:45.791: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
+Dec 10 11:27:55.795: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+Dec 10 11:27:55.795: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
+Dec 10 11:27:55.795: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Scale down will not halt with unhealthy stateful pod
+Dec 10 11:27:55.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-7568 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Dec 10 11:27:56.015: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Dec 10 11:27:56.015: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Dec 10 11:27:56.015: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Dec 10 11:27:56.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-7568 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Dec 10 11:27:56.234: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Dec 10 11:27:56.234: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Dec 10 11:27:56.234: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Dec 10 11:27:56.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-845205613 exec --namespace=statefulset-7568 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Dec 10 11:27:56.448: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Dec 10 11:27:56.448: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Dec 10 11:27:56.448: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Dec 10 11:27:56.448: INFO: Waiting for statefulset status.replicas updated to 0
+Dec 10 11:27:56.528: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
+Dec 10 11:28:06.547: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Dec 10 11:28:06.547: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
+Dec 10 11:28:06.547: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
+Dec 10 11:28:06.555: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
+Dec 10 11:28:06.555: INFO: ss-0  dce82  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  }]
+Dec 10 11:28:06.555: INFO: ss-1  dce83  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  }]
+Dec 10 11:28:06.555: INFO: ss-2  dce81  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  }]
+Dec 10 11:28:06.555: INFO: 
+Dec 10 11:28:06.555: INFO: StatefulSet ss has not reached scale 0, at 3
+Dec 10 11:28:07.560: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
+Dec 10 11:28:07.560: INFO: ss-0  dce82  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  }]
+Dec 10 11:28:07.560: INFO: ss-1  dce83  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  }]
+Dec 10 11:28:07.560: INFO: ss-2  dce81  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  }]
+Dec 10 11:28:07.560: INFO: 
+Dec 10 11:28:07.560: INFO: StatefulSet ss has not reached scale 0, at 3
+Dec 10 11:28:08.565: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
+Dec 10 11:28:08.565: INFO: ss-0  dce82  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  }]
+Dec 10 11:28:08.565: INFO: ss-1  dce83  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  }]
+Dec 10 11:28:08.565: INFO: ss-2  dce81  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  }]
+Dec 10 11:28:08.565: INFO: 
+Dec 10 11:28:08.565: INFO: StatefulSet ss has not reached scale 0, at 3
+Dec 10 11:28:09.570: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
+Dec 10 11:28:09.570: INFO: ss-0  dce82  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  }]
+Dec 10 11:28:09.570: INFO: ss-2  dce81  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  }]
+Dec 10 11:28:09.570: INFO: 
+Dec 10 11:28:09.570: INFO: StatefulSet ss has not reached scale 0, at 2
+Dec 10 11:28:10.587: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
+Dec 10 11:28:10.587: INFO: ss-0  dce82  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  }]
+Dec 10 11:28:10.587: INFO: ss-2  dce81  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:35 +0000 UTC  }]
+Dec 10 11:28:10.587: INFO: 
+Dec 10 11:28:10.587: INFO: StatefulSet ss has not reached scale 0, at 2
+Dec 10 11:28:11.590: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
+Dec 10 11:28:11.590: INFO: ss-0  dce82  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:27:14 +0000 UTC  }]
+Dec 10 11:28:11.590: INFO: 
+Dec 10 11:28:11.590: INFO: StatefulSet ss has not reached scale 0, at 1
+Dec 10 11:28:12.593: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.962636002s
+Dec 10 11:28:13.596: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.960037788s
+Dec 10 11:28:14.599: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.956861311s
+Dec 10 11:28:15.602: INFO: Verifying statefulset ss doesn't scale past 0 for another 953.829849ms
+STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-7568
+Dec 10 11:28:16.607: INFO: Scaling statefulset ss to 0
+Dec 10 11:28:16.616: INFO: Waiting for statefulset status.replicas updated to 0
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
+Dec 10 11:28:16.619: INFO: Deleting all statefulset in ns statefulset-7568
+Dec 10 11:28:16.622: INFO: Scaling statefulset ss to 0
+Dec 10 11:28:16.632: INFO: Waiting for statefulset status.replicas updated to 0
+Dec 10 11:28:16.635: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:28:16.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-7568" for this suite.
+Dec 10 11:28:22.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:28:22.732: INFO: namespace statefulset-7568 deletion completed in 6.082773906s
+
+• [SLOW TEST:68.083 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+    Burst scaling should run to completion even with unhealthy pods [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
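+
+The readiness gymnastics above (moving nginx's index.html out of the webroot and back)
+exist only to make pods unready; the real assertion is that with burst pod management
+the controller scales up and down without waiting for readiness or ordinal order.
+A condensed sketch of the setup, with illustrative names:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: ss
+spec:
+  serviceName: test
+  podManagementPolicy: Parallel   # the "burst" behaviour under test
+  replicas: 1
+  selector:
+    matchLabels: {app: ss}
+  template:
+    metadata:
+      labels: {app: ss}
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14-alpine
+        readinessProbe:
+          httpGet: {path: /index.html, port: 80}
+EOF
+kubectl exec ss-0 -- mv /usr/share/nginx/html/index.html /tmp/  # break readiness
+kubectl scale statefulset ss --replicas=3                       # scales up anyway
+kubectl scale statefulset ss --replicas=0                       # tears down without ordering
+```
+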
+SSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:28:22.732: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename gc
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-6175
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for all pods to be garbage collected
+STEP: Gathering metrics
+W1210 11:28:32.897331      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+Dec 10 11:28:32.897: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:28:32.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-6175" for this suite.
+Dec 10 11:28:38.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:28:38.999: INFO: namespace gc-6175 deletion completed in 6.099566012s
+
+• [SLOW TEST:16.267 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
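+
+The garbage-collector check deletes a ReplicationController with the default
+(cascading) policy and waits for its pods to be collected via their owner references.
+Equivalent by hand, names illustrative:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: gc-demo
+spec:
+  replicas: 2
+  selector: {app: gc-demo}
+  template:
+    metadata:
+      labels: {app: gc-demo}
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14-alpine
+EOF
+kubectl delete rc gc-demo        # default cascade deletes the owned pods too
+kubectl get pods -l app=gc-demo  # drains to empty
+# contrast: --cascade=false on the delete would orphan the pods instead
+```
+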
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:28:39.001: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2936
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward API volume plugin
+Dec 10 11:28:39.151: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1eb3c44-4af9-4ecc-9cc0-5cf8257625ae" in namespace "downward-api-2936" to be "success or failure"
+Dec 10 11:28:39.153: INFO: Pod "downwardapi-volume-f1eb3c44-4af9-4ecc-9cc0-5cf8257625ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286555ms
+Dec 10 11:28:41.156: INFO: Pod "downwardapi-volume-f1eb3c44-4af9-4ecc-9cc0-5cf8257625ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005114038s
+STEP: Saw pod success
+Dec 10 11:28:41.156: INFO: Pod "downwardapi-volume-f1eb3c44-4af9-4ecc-9cc0-5cf8257625ae" satisfied condition "success or failure"
+Dec 10 11:28:41.159: INFO: Trying to get logs from node dce82 pod downwardapi-volume-f1eb3c44-4af9-4ecc-9cc0-5cf8257625ae container client-container: 
+STEP: delete the pod
+Dec 10 11:28:41.180: INFO: Waiting for pod downwardapi-volume-f1eb3c44-4af9-4ecc-9cc0-5cf8257625ae to disappear
+Dec 10 11:28:41.182: INFO: Pod downwardapi-volume-f1eb3c44-4af9-4ecc-9cc0-5cf8257625ae no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:28:41.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-2936" for this suite.
+Dec 10 11:28:47.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:28:47.301: INFO: namespace downward-api-2936 deletion completed in 6.117348646s
+
+• [SLOW TEST:8.301 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
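+
+Same downward API idea as earlier, but projected through a volume file instead of an
+environment variable: the file carries requests.memory for the named container.
+A minimal equivalent, with illustrative values:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dapi-volume-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["cat", "/etc/podinfo/mem_request"]
+    resources:
+      requests: {memory: 32Mi}
+      limits:   {memory: 64Mi}
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: mem_request
+        resourceFieldRef:
+          containerName: client-container
+          resource: requests.memory
+EOF
+kubectl logs dapi-volume-demo   # prints the request in bytes: 33554432
+```
+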
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should retry creating failed daemon pods [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:28:47.302: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename daemonsets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-9369
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
+[It] should retry creating failed daemon pods [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a simple DaemonSet "daemon-set"
+STEP: Check that daemon pods launch on every node of the cluster.
+Dec 10 11:28:47.482: INFO: Number of nodes with available pods: 0
+Dec 10 11:28:47.482: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:28:48.491: INFO: Number of nodes with available pods: 0
+Dec 10 11:28:48.491: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:28:49.489: INFO: Number of nodes with available pods: 1
+Dec 10 11:28:49.489: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:28:50.489: INFO: Number of nodes with available pods: 3
+Dec 10 11:28:50.489: INFO: Number of running nodes: 3, number of available pods: 3
+STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
+Dec 10 11:28:50.503: INFO: Number of nodes with available pods: 2
+Dec 10 11:28:50.503: INFO: Node dce83 is running more than one daemon pod
+Dec 10 11:28:51.510: INFO: Number of nodes with available pods: 2
+Dec 10 11:28:51.510: INFO: Node dce83 is running more than one daemon pod
+Dec 10 11:28:52.512: INFO: Number of nodes with available pods: 2
+Dec 10 11:28:52.512: INFO: Node dce83 is running more than one daemon pod
+Dec 10 11:28:53.509: INFO: Number of nodes with available pods: 3
+Dec 10 11:28:53.509: INFO: Number of running nodes: 3, number of available pods: 3
+STEP: Wait for the failed daemon pod to be completely deleted.
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9369, will wait for the garbage collector to delete the pods
+Dec 10 11:28:53.575: INFO: Deleting DaemonSet.extensions daemon-set took: 9.363906ms
+Dec 10 11:28:53.975: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.24632ms
+Dec 10 11:29:06.678: INFO: Number of nodes with available pods: 0
+Dec 10 11:29:06.678: INFO: Number of running nodes: 0, number of available pods: 0
+Dec 10 11:29:06.681: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9369/daemonsets","resourceVersion":"382817"},"items":null}
+
+Dec 10 11:29:06.683: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9369/pods","resourceVersion":"382817"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:29:06.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-9369" for this suite.
+Dec 10 11:29:12.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:29:12.786: INFO: namespace daemonsets-9369 deletion completed in 6.089127929s
+
+• [SLOW TEST:25.484 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should retry creating failed daemon pods [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
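+
+The run above flips a daemon pod's phase to Failed and watches the controller replace
+it. The test does that through the API directly; by hand, deleting a daemon pod
+exercises the same recreate loop (names illustrative):
+```
+kubectl apply -f - <<'EOF'
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: daemon-set-demo
+spec:
+  selector:
+    matchLabels: {app: daemon-set-demo}
+  template:
+    metadata:
+      labels: {app: daemon-set-demo}
+    spec:
+      containers:
+      - name: app
+        image: nginx:1.14-alpine
+EOF
+kubectl delete pod -l app=daemon-set-demo    # stand-in for a failed daemon pod
+kubectl get pods -l app=daemon-set-demo -w   # controller brings one back per node
+```
+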
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:29:12.786: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename configmap
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3094
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name configmap-test-upd-2acd068a-a489-455f-afd9-5e198f806a09
+STEP: Creating the pod
+STEP: Updating configmap configmap-test-upd-2acd068a-a489-455f-afd9-5e198f806a09
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:29:17.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-3094" for this suite.
+Dec 10 11:29:39.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:29:39.128: INFO: namespace configmap-3094 deletion completed in 22.083178609s
+
+• [SLOW TEST:26.342 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
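+
+Here the pod keeps re-reading a configMap-backed file, and the test waits for the
+kubelet to propagate an update to the mounted volume. By hand (illustrative names):
+```
+kubectl create configmap demo-cm --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: cm-watch-demo
+spec:
+  containers:
+  - name: cm
+    image: busybox
+    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
+    volumeMounts:
+    - name: cm
+      mountPath: /etc/cm
+  volumes:
+  - name: cm
+    configMap: {name: demo-cm}
+EOF
+kubectl create configmap demo-cm --from-literal=data-1=value-2 \
+  --dry-run -o yaml | kubectl apply -f -   # update the same ConfigMap in place
+kubectl logs cm-watch-demo --tail=3        # value-2 appears after the kubelet sync period
+```
+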
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:29:39.129: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename init-container
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-9441
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
+[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: creating the pod
+Dec 10 11:29:39.279: INFO: PodSpec: initContainers in spec.initContainers
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:29:42.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "init-container-9441" for this suite.
+Dec 10 11:29:48.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:29:48.298: INFO: namespace init-container-9441 deletion completed in 6.079587938s
+
+• [SLOW TEST:9.170 seconds]
+[k8s.io] InitContainer [NodeConformance]
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
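+
+With restartPolicy Never, a failing init container is terminal: the pod ends up in
+Init:Error and the app container is never started, which is exactly what the test
+asserts. A minimal reproduction (illustrative name):
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: init-fail-demo
+spec:
+  restartPolicy: Never
+  initContainers:
+  - name: init-fails
+    image: busybox
+    command: ["sh", "-c", "exit 1"]
+  containers:
+  - name: app
+    image: busybox
+    command: ["true"]
+EOF
+kubectl get pod init-fail-demo   # settles at STATUS Init:Error; app never runs
+```
+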
+SSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:29:48.298: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename configmap
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8573
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name configmap-test-volume-map-80ba9f9b-493b-4142-80ec-2fd3494d52b4
+STEP: Creating a pod to test consume configMaps
+Dec 10 11:29:48.452: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d03c94e-016e-45bd-95ec-722cd8eb1e03" in namespace "configmap-8573" to be "success or failure"
+Dec 10 11:29:48.455: INFO: Pod "pod-configmaps-5d03c94e-016e-45bd-95ec-722cd8eb1e03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.467568ms
+Dec 10 11:29:50.458: INFO: Pod "pod-configmaps-5d03c94e-016e-45bd-95ec-722cd8eb1e03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005636739s
+STEP: Saw pod success
+Dec 10 11:29:50.458: INFO: Pod "pod-configmaps-5d03c94e-016e-45bd-95ec-722cd8eb1e03" satisfied condition "success or failure"
+Dec 10 11:29:50.461: INFO: Trying to get logs from node dce82 pod pod-configmaps-5d03c94e-016e-45bd-95ec-722cd8eb1e03 container configmap-volume-test: 
+STEP: delete the pod
+Dec 10 11:29:50.478: INFO: Waiting for pod pod-configmaps-5d03c94e-016e-45bd-95ec-722cd8eb1e03 to disappear
+Dec 10 11:29:50.481: INFO: Pod pod-configmaps-5d03c94e-016e-45bd-95ec-722cd8eb1e03 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:29:50.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-8573" for this suite.
+Dec 10 11:29:56.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:29:56.583: INFO: namespace configmap-8573 deletion completed in 6.098206828s
+
+• [SLOW TEST:8.284 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
+  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
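+
+The "mappings and Item mode" variant remaps a ConfigMap key to a different path and
+pins a per-item file mode (0400 below is an illustrative value; the exact mode is not
+shown in this log), then checks both from inside the container. A sketch:
+```
+kubectl create configmap demo-map-cm --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: cm-mode-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: configmap-volume-test
+    image: busybox
+    command: ["sh", "-c", "ls -lR /etc/cm && cat /etc/cm/path/to/data-1"]
+    volumeMounts:
+    - name: cm
+      mountPath: /etc/cm
+  volumes:
+  - name: cm
+    configMap:
+      name: demo-map-cm
+      items:
+      - key: data-1
+        path: path/to/data-1
+        mode: 0400   # per-item mode under test
+EOF
+kubectl logs cm-mode-demo
+```
+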
+SSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:29:56.583: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename deployment
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-225
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
+[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 11:29:56.725: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
+Dec 10 11:29:56.737: INFO: Pod name sample-pod: Found 0 pods out of 1
+Dec 10 11:30:01.752: INFO: Pod name sample-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+Dec 10 11:30:01.752: INFO: Creating deployment "test-rolling-update-deployment"
+Dec 10 11:30:01.756: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
+Dec 10 11:30:01.761: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
+Dec 10 11:30:03.767: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
+Dec 10 11:30:03.770: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
+Dec 10 11:30:03.779: INFO: Deployment "test-rolling-update-deployment":
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-225,SelfLink:/apis/apps/v1/namespaces/deployment-225/deployments/test-rolling-update-deployment,UID:c4c2c9a1-aed6-4245-8acb-4f9e69bf65df,ResourceVersion:383140,Generation:1,CreationTimestamp:2019-12-10 11:30:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-10 11:30:01 +0000 UTC 2019-12-10 11:30:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-10 11:30:03 +0000 UTC 2019-12-10 11:30:01 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
+
+Dec 10 11:30:03.782: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-225,SelfLink:/apis/apps/v1/namespaces/deployment-225/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:4b40a43c-4153-45fe-b8d1-0adc87873de3,ResourceVersion:383129,Generation:1,CreationTimestamp:2019-12-10 11:30:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c4c2c9a1-aed6-4245-8acb-4f9e69bf65df 0xc003267bb7 0xc003267bb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
+Dec 10 11:30:03.782: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
+Dec 10 11:30:03.782: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-225,SelfLink:/apis/apps/v1/namespaces/deployment-225/replicasets/test-rolling-update-controller,UID:e9ad1061-765c-45e1-bb0f-7faecc6902a3,ResourceVersion:383138,Generation:2,CreationTimestamp:2019-12-10 11:29:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c4c2c9a1-aed6-4245-8acb-4f9e69bf65df 0xc003267a77 0xc003267a78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
+Dec 10 11:30:03.789: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-w799z" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-w799z,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-225,SelfLink:/api/v1/namespaces/deployment-225/pods/test-rolling-update-deployment-79f6b9d75c-w799z,UID:5be3c02d-a0de-4bbf-b83c-2122dd484203,ResourceVersion:383128,Generation:0,CreationTimestamp:2019-12-10 11:30:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{kubernetes.io/psp: dce-psp-allow-all,},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 4b40a43c-4153-45fe-b8d1-0adc87873de3 0xc003ba0367 0xc003ba0368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5nltx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5nltx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-5nltx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:dce82,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003ba03e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003ba0400}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:30:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:30:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:30:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-10 11:30:01 +0000 UTC  }],Message:,Reason:,HostIP:10.6.135.82,PodIP:172.28.8.80,StartTime:2019-12-10 11:30:01 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-10 11:30:03 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://c6724a95d2459902115c780c73bdaafd3816d9d1d07e21d0dc5cac675a846b0d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:30:03.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-225" for this suite.
+Dec 10 11:30:09.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:30:09.884: INFO: namespace deployment-225 deletion completed in 6.092139942s
+
+• [SLOW TEST:13.301 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:30:09.884: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6667
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating configMap with name projected-configmap-test-volume-map-1fb7de55-7d1a-45da-8d90-d65981c4d861
+STEP: Creating a pod to test consume configMaps
+Dec 10 11:30:10.042: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-118923a8-fd77-484d-8a28-ce5d7e3f1612" in namespace "projected-6667" to be "success or failure"
+Dec 10 11:30:10.045: INFO: Pod "pod-projected-configmaps-118923a8-fd77-484d-8a28-ce5d7e3f1612": Phase="Pending", Reason="", readiness=false. Elapsed: 2.790943ms
+Dec 10 11:30:12.057: INFO: Pod "pod-projected-configmaps-118923a8-fd77-484d-8a28-ce5d7e3f1612": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015128574s
+STEP: Saw pod success
+Dec 10 11:30:12.057: INFO: Pod "pod-projected-configmaps-118923a8-fd77-484d-8a28-ce5d7e3f1612" satisfied condition "success or failure"
+Dec 10 11:30:12.060: INFO: Trying to get logs from node dce82 pod pod-projected-configmaps-118923a8-fd77-484d-8a28-ce5d7e3f1612 container projected-configmap-volume-test: 
+STEP: delete the pod
+Dec 10 11:30:12.111: INFO: Waiting for pod pod-projected-configmaps-118923a8-fd77-484d-8a28-ce5d7e3f1612 to disappear
+Dec 10 11:30:12.114: INFO: Pod pod-projected-configmaps-118923a8-fd77-484d-8a28-ce5d7e3f1612 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:30:12.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-6667" for this suite.
+Dec 10 11:30:18.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:30:18.203: INFO: namespace projected-6667 deletion completed in 6.085514643s
+
+• [SLOW TEST:8.319 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
+  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:30:18.203: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename secrets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-799
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating secret with name secret-test-map-2a118afd-005f-4adf-843c-e1137ca91c45
+STEP: Creating a pod to test consume secrets
+Dec 10 11:30:18.351: INFO: Waiting up to 5m0s for pod "pod-secrets-09cede7f-3975-44cc-b0e8-43dec0e4ae91" in namespace "secrets-799" to be "success or failure"
+Dec 10 11:30:18.353: INFO: Pod "pod-secrets-09cede7f-3975-44cc-b0e8-43dec0e4ae91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257854ms
+Dec 10 11:30:20.357: INFO: Pod "pod-secrets-09cede7f-3975-44cc-b0e8-43dec0e4ae91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006021388s
+STEP: Saw pod success
+Dec 10 11:30:20.357: INFO: Pod "pod-secrets-09cede7f-3975-44cc-b0e8-43dec0e4ae91" satisfied condition "success or failure"
+Dec 10 11:30:20.359: INFO: Trying to get logs from node dce82 pod pod-secrets-09cede7f-3975-44cc-b0e8-43dec0e4ae91 container secret-volume-test: 
+STEP: delete the pod
+Dec 10 11:30:20.376: INFO: Waiting for pod pod-secrets-09cede7f-3975-44cc-b0e8-43dec0e4ae91 to disappear
+Dec 10 11:30:20.378: INFO: Pod pod-secrets-09cede7f-3975-44cc-b0e8-43dec0e4ae91 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:30:20.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-799" for this suite.
+Dec 10 11:30:26.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:30:26.464: INFO: namespace secrets-799 deletion completed in 6.082681906s
+
+• [SLOW TEST:8.261 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:30:26.464: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename daemonsets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-701
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
+[It] should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+Dec 10 11:30:26.629: INFO: Create a RollingUpdate DaemonSet
+Dec 10 11:30:26.633: INFO: Check that daemon pods launch on every node of the cluster
+Dec 10 11:30:26.639: INFO: Number of nodes with available pods: 0
+Dec 10 11:30:26.639: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:30:27.650: INFO: Number of nodes with available pods: 0
+Dec 10 11:30:27.650: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:30:28.648: INFO: Number of nodes with available pods: 0
+Dec 10 11:30:28.648: INFO: Node dce81 is running more than one daemon pod
+Dec 10 11:30:29.650: INFO: Number of nodes with available pods: 3
+Dec 10 11:30:29.650: INFO: Number of running nodes: 3, number of available pods: 3
+Dec 10 11:30:29.650: INFO: Update the DaemonSet to trigger a rollout
+Dec 10 11:30:29.657: INFO: Updating DaemonSet daemon-set
+Dec 10 11:30:36.672: INFO: Roll back the DaemonSet before rollout is complete
+Dec 10 11:30:36.678: INFO: Updating DaemonSet daemon-set
+Dec 10 11:30:36.678: INFO: Make sure DaemonSet rollback is complete
+Dec 10 11:30:36.681: INFO: Wrong image for pod: daemon-set-zs4pt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
+Dec 10 11:30:36.681: INFO: Pod daemon-set-zs4pt is not available
+Dec 10 11:30:37.688: INFO: Wrong image for pod: daemon-set-zs4pt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
+Dec 10 11:30:37.688: INFO: Pod daemon-set-zs4pt is not available
+Dec 10 11:30:38.687: INFO: Wrong image for pod: daemon-set-zs4pt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
+Dec 10 11:30:38.687: INFO: Pod daemon-set-zs4pt is not available
+Dec 10 11:30:39.688: INFO: Wrong image for pod: daemon-set-zs4pt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
+Dec 10 11:30:39.688: INFO: Pod daemon-set-zs4pt is not available
+Dec 10 11:30:40.688: INFO: Wrong image for pod: daemon-set-zs4pt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
+Dec 10 11:30:40.688: INFO: Pod daemon-set-zs4pt is not available
+Dec 10 11:30:41.695: INFO: Pod daemon-set-gzs4d is not available
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-701, will wait for the garbage collector to delete the pods
+Dec 10 11:30:41.779: INFO: Deleting DaemonSet.extensions daemon-set took: 6.794608ms
+Dec 10 11:30:42.180: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.213195ms
+Dec 10 11:30:51.983: INFO: Number of nodes with available pods: 0
+Dec 10 11:30:51.983: INFO: Number of running nodes: 0, number of available pods: 0
+Dec 10 11:30:51.986: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-701/daemonsets","resourceVersion":"383466"},"items":null}
+
+Dec 10 11:30:51.989: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-701/pods","resourceVersion":"383466"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:30:52.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-701" for this suite.
+Dec 10 11:30:58.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:30:58.078: INFO: namespace daemonsets-701 deletion completed in 6.075628594s
+
+• [SLOW TEST:31.614 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:30:58.079: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename gc
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1784
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for the rc to be deleted
+STEP: Gathering metrics
+Dec 10 11:31:04.249: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:31:04.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+W1210 11:31:04.249196      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+STEP: Destroying namespace "gc-1784" for this suite.
+Dec 10 11:31:10.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:31:10.340: INFO: namespace gc-1784 deletion completed in 6.087593124s
+
+• [SLOW TEST:12.261 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Runtime blackbox test on terminated container 
+  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:31:10.340: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename container-runtime
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-8749
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: create the container
+STEP: wait for the container to reach Succeeded
+STEP: get the container status
+STEP: the container should be terminated
+STEP: the termination message should be set
+Dec 10 11:31:13.508: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
+STEP: delete the container
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:31:13.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-8749" for this suite.
+Dec 10 11:31:19.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:31:19.604: INFO: namespace container-runtime-8749 deletion completed in 6.086264601s
+
+• [SLOW TEST:9.264 seconds]
+[k8s.io] Container Runtime
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  blackbox test
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
+    on terminated container
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
+      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
+      /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSS
+------------------------------
+[k8s.io] Docker Containers 
+  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:31:19.604: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename containers
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-5173
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test override command
+Dec 10 11:31:19.747: INFO: Waiting up to 5m0s for pod "client-containers-0bcc6e0b-9856-46db-8992-4f72def4cb9e" in namespace "containers-5173" to be "success or failure"
+Dec 10 11:31:19.751: INFO: Pod "client-containers-0bcc6e0b-9856-46db-8992-4f72def4cb9e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.817232ms
+Dec 10 11:31:21.755: INFO: Pod "client-containers-0bcc6e0b-9856-46db-8992-4f72def4cb9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00749032s
+Dec 10 11:31:23.759: INFO: Pod "client-containers-0bcc6e0b-9856-46db-8992-4f72def4cb9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012101817s
+STEP: Saw pod success
+Dec 10 11:31:23.759: INFO: Pod "client-containers-0bcc6e0b-9856-46db-8992-4f72def4cb9e" satisfied condition "success or failure"
+Dec 10 11:31:23.762: INFO: Trying to get logs from node dce82 pod client-containers-0bcc6e0b-9856-46db-8992-4f72def4cb9e container test-container: 
+STEP: delete the pod
+Dec 10 11:31:23.783: INFO: Waiting for pod client-containers-0bcc6e0b-9856-46db-8992-4f72def4cb9e to disappear
+Dec 10 11:31:23.785: INFO: Pod client-containers-0bcc6e0b-9856-46db-8992-4f72def4cb9e no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:31:23.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "containers-5173" for this suite.
+Dec 10 11:31:29.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:31:29.867: INFO: namespace containers-5173 deletion completed in 6.077894581s
+
+• [SLOW TEST:10.263 seconds]
+[k8s.io] Docker Containers
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:31:29.867: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6933
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating projection with secret that has name projected-secret-test-8b517c6d-e5bf-4ce2-8fbb-b51b186c98b7
+STEP: Creating a pod to test consume secrets
+Dec 10 11:31:30.084: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-daf8a2d2-c081-4034-b82e-9f3972faca7a" in namespace "projected-6933" to be "success or failure"
+Dec 10 11:31:30.087: INFO: Pod "pod-projected-secrets-daf8a2d2-c081-4034-b82e-9f3972faca7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543986ms
+Dec 10 11:31:32.092: INFO: Pod "pod-projected-secrets-daf8a2d2-c081-4034-b82e-9f3972faca7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007521297s
+Dec 10 11:31:34.096: INFO: Pod "pod-projected-secrets-daf8a2d2-c081-4034-b82e-9f3972faca7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01128907s
+STEP: Saw pod success
+Dec 10 11:31:34.096: INFO: Pod "pod-projected-secrets-daf8a2d2-c081-4034-b82e-9f3972faca7a" satisfied condition "success or failure"
+Dec 10 11:31:34.098: INFO: Trying to get logs from node dce82 pod pod-projected-secrets-daf8a2d2-c081-4034-b82e-9f3972faca7a container projected-secret-volume-test: 
+STEP: delete the pod
+Dec 10 11:31:34.114: INFO: Waiting for pod pod-projected-secrets-daf8a2d2-c081-4034-b82e-9f3972faca7a to disappear
+Dec 10 11:31:34.117: INFO: Pod pod-projected-secrets-daf8a2d2-c081-4034-b82e-9f3972faca7a no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:31:34.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-6933" for this suite.
+Dec 10 11:31:40.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:31:40.264: INFO: namespace projected-6933 deletion completed in 6.143206214s
+
+• [SLOW TEST:10.396 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSS
+------------------------------
+[sig-node] Downward API 
+  should provide host IP as an env var [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:31:40.264: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4084
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide host IP as an env var [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward api env vars
+Dec 10 11:31:40.413: INFO: Waiting up to 5m0s for pod "downward-api-b99cddcf-604f-46c0-84bc-d54c0478f8ac" in namespace "downward-api-4084" to be "success or failure"
+Dec 10 11:31:40.415: INFO: Pod "downward-api-b99cddcf-604f-46c0-84bc-d54c0478f8ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431469ms
+Dec 10 11:31:42.419: INFO: Pod "downward-api-b99cddcf-604f-46c0-84bc-d54c0478f8ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005510049s
+STEP: Saw pod success
+Dec 10 11:31:42.419: INFO: Pod "downward-api-b99cddcf-604f-46c0-84bc-d54c0478f8ac" satisfied condition "success or failure"
+Dec 10 11:31:42.421: INFO: Trying to get logs from node dce82 pod downward-api-b99cddcf-604f-46c0-84bc-d54c0478f8ac container dapi-container: 
+STEP: delete the pod
+Dec 10 11:31:42.441: INFO: Waiting for pod downward-api-b99cddcf-604f-46c0-84bc-d54c0478f8ac to disappear
+Dec 10 11:31:42.444: INFO: Pod downward-api-b99cddcf-604f-46c0-84bc-d54c0478f8ac no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:31:42.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-4084" for this suite.
+Dec 10 11:31:48.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:31:48.527: INFO: namespace downward-api-4084 deletion completed in 6.079940453s
+
+• [SLOW TEST:8.263 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
+  should provide host IP as an env var [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1 
+  should proxy through a service and a pod  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] version v1
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:31:48.527: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename proxy
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-7854
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy through a service and a pod  [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: starting an echo server on multiple ports
+STEP: creating replication controller proxy-service-q49bv in namespace proxy-7854
+I1210 11:31:48.690665      19 runners.go:180] Created replication controller with name: proxy-service-q49bv, namespace: proxy-7854, replica count: 1
+I1210 11:31:49.741190      19 runners.go:180] proxy-service-q49bv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1210 11:31:50.741513      19 runners.go:180] proxy-service-q49bv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1210 11:31:51.741698      19 runners.go:180] proxy-service-q49bv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1210 11:31:52.741931      19 runners.go:180] proxy-service-q49bv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1210 11:31:53.742153      19 runners.go:180] proxy-service-q49bv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1210 11:31:54.742429      19 runners.go:180] proxy-service-q49bv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1210 11:31:55.742737      19 runners.go:180] proxy-service-q49bv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I1210 11:31:56.743013      19 runners.go:180] proxy-service-q49bv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I1210 11:31:57.743306      19 runners.go:180] proxy-service-q49bv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I1210 11:31:58.743713      19 runners.go:180] proxy-service-q49bv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+Dec 10 11:31:58.747: INFO: setup took 10.071287853s, starting test cases
+STEP: running 16 cases, 20 attempts per case, 320 total attempts
+Dec 10 11:31:58.754: INFO: (0) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 6.865979ms)
+Dec 10 11:31:58.755: INFO: (0) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 6.301487ms)
+Dec 10 11:31:58.755: INFO: (0) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 5.667015ms)
+Dec 10 11:31:58.755: INFO: (0) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 6.343352ms)
+Dec 10 11:31:58.755: INFO: (0) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 7.471683ms)
+Dec 10 11:31:58.756: INFO: (0) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 8.333231ms)
+Dec 10 11:31:58.756: INFO: (0) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 7.529371ms)
+Dec 10 11:31:58.757: INFO: (0) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 8.420627ms)
+Dec 10 11:31:58.761: INFO: (0) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 13.15434ms)
+Dec 10 11:31:58.761: INFO: (0) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 12.213835ms)
+Dec 10 11:31:58.763: INFO: (0) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:160/proxy/: foo (200; 15.004145ms)
+Dec 10 11:31:58.764: INFO: (0) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 14.585163ms)
+Dec 10 11:31:58.764: INFO: (0) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: ... (200; 3.752795ms)
+Dec 10 11:31:58.779: INFO: (1) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 11.867056ms)
+Dec 10 11:31:58.779: INFO: (1) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 12.365845ms)
+Dec 10 11:31:58.782: INFO: (1) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 14.973667ms)
+Dec 10 11:31:58.782: INFO: (1) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 14.90837ms)
+Dec 10 11:31:58.782: INFO: (1) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 15.032843ms)
+Dec 10 11:31:58.783: INFO: (1) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: test<... (200; 16.843717ms)
+Dec 10 11:31:58.784: INFO: (1) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 17.144581ms)
+Dec 10 11:31:58.784: INFO: (1) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 17.085578ms)
+Dec 10 11:31:58.784: INFO: (1) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:160/proxy/: foo (200; 17.247173ms)
+Dec 10 11:31:58.784: INFO: (1) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 17.387193ms)
+Dec 10 11:31:58.789: INFO: (2) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: ... (200; 5.349093ms)
+Dec 10 11:31:58.791: INFO: (2) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 5.415724ms)
+Dec 10 11:31:58.791: INFO: (2) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 6.165123ms)
+Dec 10 11:31:58.791: INFO: (2) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 5.556624ms)
+Dec 10 11:31:58.791: INFO: (2) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 6.318439ms)
+Dec 10 11:31:58.806: INFO: (2) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 21.588312ms)
+Dec 10 11:31:58.806: INFO: (2) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 21.289416ms)
+Dec 10 11:31:58.806: INFO: (2) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 20.857224ms)
+Dec 10 11:31:58.806: INFO: (2) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:160/proxy/: foo (200; 21.126667ms)
+Dec 10 11:31:58.806: INFO: (2) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 20.966022ms)
+Dec 10 11:31:58.806: INFO: (2) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 21.036295ms)
+Dec 10 11:31:58.806: INFO: (2) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 21.375547ms)
+Dec 10 11:31:58.806: INFO: (2) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 21.502739ms)
+Dec 10 11:31:58.806: INFO: (2) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 20.76191ms)
+Dec 10 11:31:58.806: INFO: (2) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 21.601462ms)
+Dec 10 11:31:58.822: INFO: (3) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: ... (200; 16.74644ms)
+Dec 10 11:31:58.823: INFO: (3) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 16.903962ms)
+Dec 10 11:31:58.824: INFO: (3) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 16.919515ms)
+Dec 10 11:31:58.824: INFO: (3) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 14.3152ms)
+Dec 10 11:31:58.824: INFO: (3) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 14.69877ms)
+Dec 10 11:31:58.825: INFO: (3) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 15.355963ms)
+Dec 10 11:31:58.825: INFO: (3) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 18.337565ms)
+Dec 10 11:31:58.825: INFO: (3) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 15.82074ms)
+Dec 10 11:31:58.825: INFO: (3) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 18.351098ms)
+Dec 10 11:31:58.825: INFO: (3) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 15.942442ms)
+Dec 10 11:31:58.825: INFO: (3) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 18.521526ms)
+Dec 10 11:31:58.833: INFO: (3) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 26.143603ms)
+Dec 10 11:31:58.840: INFO: (4) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 6.624522ms)
+Dec 10 11:31:58.840: INFO: (4) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 7.466002ms)
+Dec 10 11:31:58.840: INFO: (4) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 7.584843ms)
+Dec 10 11:31:58.840: INFO: (4) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 7.445424ms)
+Dec 10 11:31:58.841: INFO: (4) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 8.164839ms)
+Dec 10 11:31:58.841: INFO: (4) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 7.980066ms)
+Dec 10 11:31:58.841: INFO: (4) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 8.152033ms)
+Dec 10 11:31:58.841: INFO: (4) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 8.387819ms)
+Dec 10 11:31:58.841: INFO: (4) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 8.503627ms)
+Dec 10 11:31:58.841: INFO: (4) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 8.488224ms)
+Dec 10 11:31:58.841: INFO: (4) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 8.447697ms)
+Dec 10 11:31:58.842: INFO: (4) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: test<... (200; 77.149414ms)
+Dec 10 11:31:58.920: INFO: (5) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 77.583777ms)
+Dec 10 11:31:58.920: INFO: (5) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 77.920298ms)
+Dec 10 11:31:58.920: INFO: (5) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 77.483598ms)
+Dec 10 11:31:58.921: INFO: (5) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 78.404417ms)
+Dec 10 11:31:58.921: INFO: (5) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 78.303297ms)
+Dec 10 11:31:58.922: INFO: (5) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 79.68557ms)
+Dec 10 11:31:58.922: INFO: (5) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 79.691613ms)
+Dec 10 11:31:58.922: INFO: (5) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 79.561427ms)
+Dec 10 11:31:58.923: INFO: (5) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 80.941034ms)
+Dec 10 11:31:58.923: INFO: (5) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 80.304817ms)
+Dec 10 11:31:58.923: INFO: (5) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 80.950353ms)
+Dec 10 11:31:58.930: INFO: (6) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 6.509733ms)
+Dec 10 11:31:58.931: INFO: (6) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:160/proxy/: foo (200; 5.941567ms)
+Dec 10 11:31:58.931: INFO: (6) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 5.776562ms)
+Dec 10 11:31:58.931: INFO: (6) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 6.170308ms)
+Dec 10 11:31:58.931: INFO: (6) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 7.352858ms)
+Dec 10 11:31:58.931: INFO: (6) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 6.625235ms)
+Dec 10 11:31:58.931: INFO: (6) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 7.061205ms)
+Dec 10 11:31:58.931: INFO: (6) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 6.490306ms)
+Dec 10 11:31:58.931: INFO: (6) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: ... (200; 4.388432ms)
+Dec 10 11:31:58.937: INFO: (7) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 4.233613ms)
+Dec 10 11:31:58.937: INFO: (7) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: test<... (200; 4.301559ms)
+Dec 10 11:31:58.938: INFO: (7) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 5.216711ms)
+Dec 10 11:31:58.938: INFO: (7) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 5.413199ms)
+Dec 10 11:31:58.938: INFO: (7) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 5.140845ms)
+Dec 10 11:31:58.938: INFO: (7) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 5.04107ms)
+Dec 10 11:31:58.938: INFO: (7) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 5.136742ms)
+Dec 10 11:31:58.943: INFO: (8) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 4.448291ms)
+Dec 10 11:31:58.944: INFO: (8) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 3.936693ms)
+Dec 10 11:31:58.944: INFO: (8) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 4.197111ms)
+Dec 10 11:31:58.944: INFO: (8) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 4.492992ms)
+Dec 10 11:31:58.944: INFO: (8) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 4.918369ms)
+Dec 10 11:31:58.944: INFO: (8) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 4.623928ms)
+Dec 10 11:31:58.944: INFO: (8) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 4.099805ms)
+Dec 10 11:31:58.944: INFO: (8) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:160/proxy/: foo (200; 4.7688ms)
+Dec 10 11:31:58.944: INFO: (8) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 3.812465ms)
+Dec 10 11:31:58.944: INFO: (8) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: test<... (200; 9.621419ms)
+Dec 10 11:31:58.956: INFO: (9) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 8.805508ms)
+Dec 10 11:31:58.956: INFO: (9) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 9.293107ms)
+Dec 10 11:31:58.956: INFO: (9) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 8.811231ms)
+Dec 10 11:31:58.956: INFO: (9) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 9.434717ms)
+Dec 10 11:31:58.956: INFO: (9) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 9.659995ms)
+Dec 10 11:31:58.956: INFO: (9) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 9.82898ms)
+Dec 10 11:31:58.956: INFO: (9) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 9.223805ms)
+Dec 10 11:31:58.956: INFO: (9) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 10.067454ms)
+Dec 10 11:31:58.956: INFO: (9) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 10.014082ms)
+Dec 10 11:31:58.959: INFO: (10) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 2.751249ms)
+Dec 10 11:31:58.960: INFO: (10) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: ... (200; 4.583748ms)
+Dec 10 11:31:58.961: INFO: (10) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:160/proxy/: foo (200; 4.603245ms)
+Dec 10 11:31:58.961: INFO: (10) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 4.273372ms)
+Dec 10 11:31:58.961: INFO: (10) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 4.181303ms)
+Dec 10 11:31:58.961: INFO: (10) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 4.243672ms)
+Dec 10 11:31:58.962: INFO: (10) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 5.457001ms)
+Dec 10 11:31:58.962: INFO: (10) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 5.339069ms)
+Dec 10 11:31:58.962: INFO: (10) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 5.757152ms)
+Dec 10 11:31:58.962: INFO: (10) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 5.020664ms)
+Dec 10 11:31:58.963: INFO: (10) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 5.571097ms)
+Dec 10 11:31:58.963: INFO: (10) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 5.921658ms)
+Dec 10 11:31:58.963: INFO: (10) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 6.380205ms)
+Dec 10 11:31:58.965: INFO: (11) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: test (200; 4.250133ms)
+Dec 10 11:31:58.967: INFO: (11) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 4.425996ms)
+Dec 10 11:31:58.967: INFO: (11) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 4.200348ms)
+Dec 10 11:31:58.967: INFO: (11) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 4.423711ms)
+Dec 10 11:31:58.967: INFO: (11) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 4.56332ms)
+Dec 10 11:31:58.968: INFO: (11) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 4.464508ms)
+Dec 10 11:31:58.968: INFO: (11) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:160/proxy/: foo (200; 4.418273ms)
+Dec 10 11:31:58.969: INFO: (11) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 5.985242ms)
+Dec 10 11:31:58.969: INFO: (11) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 6.065973ms)
+Dec 10 11:31:58.969: INFO: (11) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 6.02657ms)
+Dec 10 11:31:58.969: INFO: (11) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 6.043967ms)
+Dec 10 11:31:58.969: INFO: (11) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 6.162094ms)
+Dec 10 11:31:58.972: INFO: (12) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 2.890788ms)
+Dec 10 11:31:58.972: INFO: (12) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 3.151807ms)
+Dec 10 11:31:58.972: INFO: (12) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: ... (200; 3.810489ms)
+Dec 10 11:31:58.974: INFO: (12) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 4.513425ms)
+Dec 10 11:31:58.974: INFO: (12) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 4.500216ms)
+Dec 10 11:31:58.974: INFO: (12) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:160/proxy/: foo (200; 4.462287ms)
+Dec 10 11:31:58.975: INFO: (12) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 4.836168ms)
+Dec 10 11:31:58.975: INFO: (12) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 4.557534ms)
+Dec 10 11:31:58.975: INFO: (12) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 4.867014ms)
+Dec 10 11:31:58.975: INFO: (12) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 4.700504ms)
+Dec 10 11:31:58.975: INFO: (12) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 5.100861ms)
+Dec 10 11:31:58.975: INFO: (12) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 5.023594ms)
+Dec 10 11:31:58.975: INFO: (12) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 5.662198ms)
+Dec 10 11:31:58.975: INFO: (12) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 5.200711ms)
+Dec 10 11:31:58.980: INFO: (13) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 4.446039ms)
+Dec 10 11:31:58.980: INFO: (13) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 4.46689ms)
+Dec 10 11:31:58.980: INFO: (13) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 4.413521ms)
+Dec 10 11:31:58.980: INFO: (13) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 4.080835ms)
+Dec 10 11:31:58.981: INFO: (13) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 4.914183ms)
+Dec 10 11:31:58.981: INFO: (13) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 4.851273ms)
+Dec 10 11:31:58.981: INFO: (13) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 4.833219ms)
+Dec 10 11:31:58.981: INFO: (13) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 4.935516ms)
+Dec 10 11:31:58.981: INFO: (13) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 5.588581ms)
+Dec 10 11:31:58.981: INFO: (13) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: ... (200; 3.685792ms)
+Dec 10 11:31:58.986: INFO: (14) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 4.483482ms)
+Dec 10 11:31:58.986: INFO: (14) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 3.632222ms)
+Dec 10 11:31:58.986: INFO: (14) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 3.841232ms)
+Dec 10 11:31:58.986: INFO: (14) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 4.284134ms)
+Dec 10 11:31:58.986: INFO: (14) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 4.231879ms)
+Dec 10 11:31:58.987: INFO: (14) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 4.855134ms)
+Dec 10 11:31:58.987: INFO: (14) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 5.276908ms)
+Dec 10 11:31:58.987: INFO: (14) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 5.395419ms)
+Dec 10 11:31:58.988: INFO: (14) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 5.480577ms)
+Dec 10 11:31:58.988: INFO: (14) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 5.279201ms)
+Dec 10 11:31:58.992: INFO: (15) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: test (200; 4.781843ms)
+Dec 10 11:31:58.993: INFO: (15) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 4.461786ms)
+Dec 10 11:31:58.993: INFO: (15) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 4.710219ms)
+Dec 10 11:31:58.993: INFO: (15) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 4.66279ms)
+Dec 10 11:31:58.993: INFO: (15) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 5.083284ms)
+Dec 10 11:31:58.993: INFO: (15) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 5.303909ms)
+Dec 10 11:31:58.993: INFO: (15) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 5.158407ms)
+Dec 10 11:31:58.993: INFO: (15) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 5.288505ms)
+Dec 10 11:31:58.994: INFO: (15) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:160/proxy/: foo (200; 5.366265ms)
+Dec 10 11:31:58.994: INFO: (15) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 5.500754ms)
+Dec 10 11:31:58.994: INFO: (15) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 5.904636ms)
+Dec 10 11:31:58.994: INFO: (15) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 5.826057ms)
+Dec 10 11:31:58.994: INFO: (15) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 5.878155ms)
+Dec 10 11:31:58.994: INFO: (15) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 6.06628ms)
+Dec 10 11:31:58.994: INFO: (15) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 6.060586ms)
+Dec 10 11:31:58.997: INFO: (16) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 2.576958ms)
+Dec 10 11:31:58.997: INFO: (16) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 3.237624ms)
+Dec 10 11:31:58.997: INFO: (16) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 2.906193ms)
+Dec 10 11:31:58.998: INFO: (16) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm/proxy/: test (200; 3.368858ms)
+Dec 10 11:31:58.998: INFO: (16) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: ... (200; 3.789252ms)
+Dec 10 11:31:58.998: INFO: (16) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 3.874774ms)
+Dec 10 11:31:58.998: INFO: (16) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 3.751598ms)
+Dec 10 11:31:58.998: INFO: (16) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 3.943626ms)
+Dec 10 11:31:58.998: INFO: (16) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:160/proxy/: foo (200; 3.818278ms)
+Dec 10 11:31:58.998: INFO: (16) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 3.653689ms)
+Dec 10 11:31:58.998: INFO: (16) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 3.621728ms)
+Dec 10 11:31:58.998: INFO: (16) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 4.137001ms)
+Dec 10 11:31:59.001: INFO: (17) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 2.389651ms)
+Dec 10 11:31:59.002: INFO: (17) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 2.673011ms)
+Dec 10 11:31:59.002: INFO: (17) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 2.882195ms)
+Dec 10 11:31:59.002: INFO: (17) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 3.102823ms)
+Dec 10 11:31:59.002: INFO: (17) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 3.274249ms)
+Dec 10 11:31:59.002: INFO: (17) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 3.090339ms)
+Dec 10 11:31:59.002: INFO: (17) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: test (200; 3.186909ms)
+Dec 10 11:31:59.002: INFO: (17) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 3.366139ms)
+Dec 10 11:31:59.003: INFO: (17) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 3.817461ms)
+Dec 10 11:31:59.003: INFO: (17) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 4.02091ms)
+Dec 10 11:31:59.003: INFO: (17) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 4.085937ms)
+Dec 10 11:31:59.003: INFO: (17) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 4.005033ms)
+Dec 10 11:31:59.003: INFO: (17) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 3.985948ms)
+Dec 10 11:31:59.017: INFO: (18) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:162/proxy/: bar (200; 13.843021ms)
+Dec 10 11:31:59.017: INFO: (18) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: test (200; 14.683607ms)
+Dec 10 11:31:59.018: INFO: (18) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:460/proxy/: tls baz (200; 14.42443ms)
+Dec 10 11:31:59.018: INFO: (18) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 14.504603ms)
+Dec 10 11:31:59.018: INFO: (18) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 14.612683ms)
+Dec 10 11:31:59.018: INFO: (18) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:160/proxy/: foo (200; 14.572921ms)
+Dec 10 11:31:59.018: INFO: (18) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:160/proxy/: foo (200; 14.348157ms)
+Dec 10 11:31:59.018: INFO: (18) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 14.395935ms)
+Dec 10 11:31:59.019: INFO: (18) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 15.976375ms)
+Dec 10 11:31:59.020: INFO: (18) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 15.940071ms)
+Dec 10 11:31:59.020: INFO: (18) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname1/proxy/: foo (200; 16.43765ms)
+Dec 10 11:31:59.020: INFO: (18) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 16.036874ms)
+Dec 10 11:31:59.020: INFO: (18) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 16.350663ms)
+Dec 10 11:31:59.020: INFO: (18) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 16.482408ms)
+Dec 10 11:31:59.034: INFO: (19) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname1/proxy/: foo (200; 13.76036ms)
+Dec 10 11:31:59.034: INFO: (19) /api/v1/namespaces/proxy-7854/services/http:proxy-service-q49bv:portname2/proxy/: bar (200; 13.498309ms)
+Dec 10 11:31:59.037: INFO: (19) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:162/proxy/: bar (200; 17.021556ms)
+Dec 10 11:31:59.037: INFO: (19) /api/v1/namespaces/proxy-7854/pods/http:proxy-service-q49bv-wtssm:1080/proxy/: ... (200; 16.748583ms)
+Dec 10 11:31:59.037: INFO: (19) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:462/proxy/: tls qux (200; 17.120351ms)
+Dec 10 11:31:59.037: INFO: (19) /api/v1/namespaces/proxy-7854/pods/https:proxy-service-q49bv-wtssm:443/proxy/: test (200; 17.183884ms)
+Dec 10 11:31:59.038: INFO: (19) /api/v1/namespaces/proxy-7854/pods/proxy-service-q49bv-wtssm:1080/proxy/: test<... (200; 17.800177ms)
+Dec 10 11:31:59.038: INFO: (19) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname1/proxy/: tls baz (200; 18.506034ms)
+Dec 10 11:31:59.039: INFO: (19) /api/v1/namespaces/proxy-7854/services/https:proxy-service-q49bv:tlsportname2/proxy/: tls qux (200; 18.241234ms)
+Dec 10 11:31:59.039: INFO: (19) /api/v1/namespaces/proxy-7854/services/proxy-service-q49bv:portname2/proxy/: bar (200; 17.912012ms)
+STEP: deleting ReplicationController proxy-service-q49bv in namespace proxy-7854, will wait for the garbage collector to delete the pods
+Dec 10 11:31:59.096: INFO: Deleting ReplicationController proxy-service-q49bv took: 5.071823ms
+Dec 10 11:31:59.496: INFO: Terminating ReplicationController proxy-service-q49bv pods took: 400.15882ms
+[AfterEach] version v1
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:32:01.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "proxy-7854" for this suite.
+Dec 10 11:32:08.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:32:08.078: INFO: namespace proxy-7854 deletion completed in 6.077036938s
+
+• [SLOW TEST:19.551 seconds]
+[sig-network] Proxy
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  version v1
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
+    should proxy through a service and a pod  [Conformance]
+    /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+S
+------------------------------
+[sig-node] Downward API 
+  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:32:08.078: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8353
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating a pod to test downward api env vars
+Dec 10 11:32:08.220: INFO: Waiting up to 5m0s for pod "downward-api-04cafb30-7eb0-4dad-8a6f-a926ad29979d" in namespace "downward-api-8353" to be "success or failure"
+Dec 10 11:32:08.222: INFO: Pod "downward-api-04cafb30-7eb0-4dad-8a6f-a926ad29979d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160894ms
+Dec 10 11:32:10.226: INFO: Pod "downward-api-04cafb30-7eb0-4dad-8a6f-a926ad29979d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00644491s
+STEP: Saw pod success
+Dec 10 11:32:10.226: INFO: Pod "downward-api-04cafb30-7eb0-4dad-8a6f-a926ad29979d" satisfied condition "success or failure"
+Dec 10 11:32:10.230: INFO: Trying to get logs from node dce82 pod downward-api-04cafb30-7eb0-4dad-8a6f-a926ad29979d container dapi-container: <nil>
+STEP: delete the pod
+Dec 10 11:32:10.252: INFO: Waiting for pod downward-api-04cafb30-7eb0-4dad-8a6f-a926ad29979d to disappear
+Dec 10 11:32:10.255: INFO: Pod downward-api-04cafb30-7eb0-4dad-8a6f-a926ad29979d no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:32:10.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-8353" for this suite.
+Dec 10 11:32:16.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:32:16.332: INFO: namespace downward-api-8353 deletion completed in 6.073427749s
+
+• [SLOW TEST:8.254 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
+  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+STEP: Creating a kubernetes client
+Dec 10 11:32:16.333: INFO: >>> kubeConfig: /tmp/kubeconfig-845205613
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7752
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+STEP: Creating projection with secret that has name projected-secret-test-map-92167a2c-288b-4f9b-bcef-930940dd00dd
+STEP: Creating a pod to test consume secrets
+Dec 10 11:32:16.486: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b2071bfb-3430-4391-bed1-ecb1afd173cc" in namespace "projected-7752" to be "success or failure"
+Dec 10 11:32:16.489: INFO: Pod "pod-projected-secrets-b2071bfb-3430-4391-bed1-ecb1afd173cc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.29263ms
+Dec 10 11:32:18.495: INFO: Pod "pod-projected-secrets-b2071bfb-3430-4391-bed1-ecb1afd173cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009410351s
+STEP: Saw pod success
+Dec 10 11:32:18.495: INFO: Pod "pod-projected-secrets-b2071bfb-3430-4391-bed1-ecb1afd173cc" satisfied condition "success or failure"
+Dec 10 11:32:18.498: INFO: Trying to get logs from node dce82 pod pod-projected-secrets-b2071bfb-3430-4391-bed1-ecb1afd173cc container projected-secret-volume-test: <nil>
+STEP: delete the pod
+Dec 10 11:32:18.509: INFO: Waiting for pod pod-projected-secrets-b2071bfb-3430-4391-bed1-ecb1afd173cc to disappear
+Dec 10 11:32:18.511: INFO: Pod pod-projected-secrets-b2071bfb-3430-4391-bed1-ecb1afd173cc no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+Dec 10 11:32:18.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-7752" for this suite.
+Dec 10 11:32:24.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 10 11:32:24.596: INFO: namespace projected-7752 deletion completed in 6.081767089s
+
+• [SLOW TEST:8.263 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.15.3-beta.0.68+2d3c76f9091b6b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSDec 10 11:32:24.596: INFO: Running AfterSuite actions on all nodes
+Dec 10 11:32:24.596: INFO: Running AfterSuite actions on node 1
+Dec 10 11:32:24.596: INFO: Skipping dumping logs from cluster
+
+Ran 215 of 4413 Specs in 5702.436 seconds
+SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
+PASS
+
+Ginkgo ran 1 suite in 1h35m3.858415652s
+Test Suite Passed
diff --git a/v1.15/dce/junit_01.xml b/v1.15/dce/junit_01.xml
new file mode 100644
index 0000000000..f0001ac5e6
--- /dev/null
+++ b/v1.15/dce/junit_01.xml
@@ -0,0 +1,12812 @@
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          