From 89a06f95bf7e97c674bee19eb3bc130c3cc1cba7 Mon Sep 17 00:00:00 2001
From: OwenYang
Date: Fri, 21 Dec 2018 06:40:44 +0800
Subject: [PATCH] Conformance results for v1.13/dce (#418)

* Conformance results for v1.13/dce

* Conformance results for v1.13/dce
---
 v1.13/dce/PRODUCT.yaml |     6 +
 v1.13/dce/README.md    |    69 +
 v1.13/dce/e2e.log      | 10094 +++++++++++++++++++++++++++++++++++++++
 v1.13/dce/junit_01.xml |  5441 +++++++++++++++++++++
 v1.13/dce/version.txt  |     2 +
 5 files changed, 15612 insertions(+)
 create mode 100644 v1.13/dce/PRODUCT.yaml
 create mode 100644 v1.13/dce/README.md
 create mode 100644 v1.13/dce/e2e.log
 create mode 100644 v1.13/dce/junit_01.xml
 create mode 100644 v1.13/dce/version.txt

diff --git a/v1.13/dce/PRODUCT.yaml b/v1.13/dce/PRODUCT.yaml
new file mode 100644
index 0000000000..840d060e7d
--- /dev/null
+++ b/v1.13/dce/PRODUCT.yaml
@@ -0,0 +1,6 @@
+vendor: DaoCloud
+name: DaoCloud Enterprise
+version: v3.0.3
+website_url: http://www.daocloud.io/dce
+documentation_url: http://guide.daocloud.io/dce-v3.0
+product_logo_url: http://guide.daocloud.io/download/attachments/524290/global.logo?version=2&modificationDate=1469173304363&api=v2
diff --git a/v1.13/dce/README.md b/v1.13/dce/README.md
new file mode 100644
index 0000000000..77ac708f73
--- /dev/null
+++ b/v1.13/dce/README.md
@@ -0,0 +1,69 @@
+# DaoCloud Enterprise
+
+DaoCloud Enterprise is a Kubernetes-based platform developed by [DaoCloud](https://www.daocloud.io).
+
+## Set up a DCE Cluster
+
+First install DaoCloud Enterprise 3.0.3, which is based on Kubernetes 1.13.1.
+
+To install DaoCloud Enterprise, run the following commands on a CentOS 7.4 system:
+
+```
+export VERSION=3.0.3
+curl -L https://dce.daocloud.io/DaoCloud_Enterprise/$VERSION/os-requirements > /usr/local/bin/os-requirements
+chmod +x /usr/local/bin/os-requirements
+/usr/local/bin/os-requirements
+bash -c "$(docker run -i --rm daocloud.io/daocloud/dce:$VERSION install)"
+```
+
+To add more nodes to the cluster, log into the DaoCloud Enterprise control panel and follow the instructions in the node management section.
+
+After the installation, run ```docker exec -it `docker ps | grep dce-kube-controller | awk '{print$1}'` bash``` to enter the DaoCloud Enterprise Kubernetes controller container.
+
+## Run conformance tests
+
+The standard tool for running these tests is
+[Sonobuoy](https://github.com/heptio/sonobuoy).
+
+Download a [binary release](https://github.com/heptio/sonobuoy/releases) of the CLI, or build it yourself by running:
+
+```
+$ go get -u -v github.com/heptio/sonobuoy
+```
+
+Deploy a Sonobuoy pod to your cluster with:
+
+```
+$ sonobuoy run
+```
+
+View actively running pods:
+
+```
+$ sonobuoy status
+```
+
+To inspect the logs:
+
+```
+$ sonobuoy logs
+```
+
+Once `sonobuoy status` shows the run as `completed`, copy the output directory from the main Sonobuoy pod to
+a local directory:
+
+```
+$ sonobuoy retrieve .
+```
+
+This copies a single `.tar.gz` snapshot from the Sonobuoy pod into your local
+`.` directory. Extract the contents into `./results` with:
+
+```
+mkdir ./results; tar xzf *.tar.gz -C ./results
+```
+
+**NOTE:** The two files required for submission are located in the tarball under **plugins/e2e/results/{e2e.log,junit.xml}**.
+ +To clean up Kubernetes objects created by Sonobuoy, run: + +``` +sonobuoy delete +``` diff --git a/v1.13/dce/e2e.log b/v1.13/dce/e2e.log new file mode 100644 index 0000000000..ec4571c3f6 --- /dev/null +++ b/v1.13/dce/e2e.log @@ -0,0 +1,10094 @@ +I1220 07:21:56.795708 17 test_context.go:358] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-647384748 +I1220 07:21:56.803133 17 e2e.go:224] Starting e2e run "ed5ee1a0-0427-11e9-b141-0a58ac1c1472" on Ginkgo node 1 +Running Suite: Kubernetes e2e suite +=================================== +Random Seed: 1545290511 - Will randomize all specs +Will run 201 of 1946 specs + +Dec 20 07:21:57.098: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 07:21:57.116: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Dec 20 07:21:57.166: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready +Dec 20 07:21:57.226: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Dec 20 07:21:57.227: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready. 
+Dec 20 07:21:57.227: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Dec 20 07:21:57.239: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) +Dec 20 07:21:57.239: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Dec 20 07:21:57.239: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'smokeping' (0 seconds elapsed) +Dec 20 07:21:57.239: INFO: e2e test version: v1.13.0 +Dec 20 07:21:57.241: INFO: kube-apiserver version: v1.13.1 +SSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:21:57.241: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +Dec 20 07:21:57.357: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
+STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Dec 20 07:21:57.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef204c8a-0427-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-jdl25" to be "success or failure" +Dec 20 07:22:04.582: INFO: Pod "downwardapi-volume-ef204c8a-0427-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 7.209736671s +Dec 20 07:22:06.589: INFO: Pod "downwardapi-volume-ef204c8a-0427-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.216393257s +STEP: Saw pod success +Dec 20 07:22:06.589: INFO: Pod "downwardapi-volume-ef204c8a-0427-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:22:06.594: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-ef204c8a-0427-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 07:22:06.631: INFO: Waiting for pod downwardapi-volume-ef204c8a-0427-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:22:06.636: INFO: Pod downwardapi-volume-ef204c8a-0427-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:22:06.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-jdl25" for this suite. 
+Dec 20 07:22:12.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:22:12.774: INFO: namespace: e2e-tests-downward-api-jdl25, resource: bindings, ignored listing per whitelist +Dec 20 07:22:12.776: INFO: namespace e2e-tests-downward-api-jdl25 deletion completed in 6.135502284s + +• [SLOW TEST:15.535 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Proxy server + should support --unix-socket=/path [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:22:12.776: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[It] should support --unix-socket=/path [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Starting the proxy +Dec 20 07:22:12.894: INFO: Asynchronously 
running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-647384748 proxy --unix-socket=/tmp/kubectl-proxy-unix665872923/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:22:14.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-5l9nb" for this suite. +Dec 20 07:22:20.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:22:20.113: INFO: namespace: e2e-tests-kubectl-5l9nb, resource: bindings, ignored listing per whitelist +Dec 20 07:22:20.133: INFO: namespace e2e-tests-kubectl-5l9nb deletion completed in 6.126418119s + +• [SLOW TEST:7.356 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Proxy server + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should support --unix-socket=/path [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a 
kubernetes client +Dec 20 07:22:20.133: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating pod pod-subpath-test-configmap-vn7z +STEP: Creating a pod to test atomic-volume-subpath +Dec 20 07:22:20.309: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vn7z" in namespace "e2e-tests-subpath-mhjpk" to be "success or failure" +Dec 20 07:22:20.314: INFO: Pod "pod-subpath-test-configmap-vn7z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.877297ms +Dec 20 07:22:22.326: INFO: Pod "pod-subpath-test-configmap-vn7z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017032451s +Dec 20 07:22:24.331: INFO: Pod "pod-subpath-test-configmap-vn7z": Phase="Pending", Reason="", readiness=false. Elapsed: 22.064190254s +Dec 20 07:22:44.378: INFO: Pod "pod-subpath-test-configmap-vn7z": Phase="Running", Reason="", readiness=false. Elapsed: 24.068329873s +Dec 20 07:22:46.382: INFO: Pod "pod-subpath-test-configmap-vn7z": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.072555665s +STEP: Saw pod success +Dec 20 07:22:46.382: INFO: Pod "pod-subpath-test-configmap-vn7z" satisfied condition "success or failure" +Dec 20 07:22:46.385: INFO: Trying to get logs from node 10-6-155-34 pod pod-subpath-test-configmap-vn7z container test-container-subpath-configmap-vn7z: +STEP: delete the pod +Dec 20 07:22:46.408: INFO: Waiting for pod pod-subpath-test-configmap-vn7z to disappear +Dec 20 07:22:46.411: INFO: Pod pod-subpath-test-configmap-vn7z no longer exists +STEP: Deleting pod pod-subpath-test-configmap-vn7z +Dec 20 07:22:46.411: INFO: Deleting pod "pod-subpath-test-configmap-vn7z" in namespace "e2e-tests-subpath-mhjpk" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:22:46.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-subpath-mhjpk" for this suite. 
+Dec 20 07:22:52.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:22:52.499: INFO: namespace: e2e-tests-subpath-mhjpk, resource: bindings, ignored listing per whitelist +Dec 20 07:22:52.567: INFO: namespace e2e-tests-subpath-mhjpk deletion completed in 6.144756335s + +• [SLOW TEST:32.434 seconds] +[sig-storage] Subpath +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod with mountPath of existing file [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update + should support rolling-update to same image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:22:52.567: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Kubectl rolling-update + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 +[It] should support rolling-update to same image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: running the image docker.io/library/nginx:1.14-alpine +Dec 20 07:22:52.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-xvmvk' +Dec 20 07:22:53.473: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Dec 20 07:22:53.473: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" +STEP: verifying the rc e2e-test-nginx-rc was created +Dec 20 07:22:53.497: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 +Dec 20 07:22:53.497: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 +STEP: rolling-update to same image controller +Dec 20 07:22:53.507: INFO: scanned /root for discovery docs: +Dec 20 07:22:53.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-xvmvk' +Dec 20 07:23:11.484: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" +Dec 20 07:23:11.484: INFO: stdout: "Created e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f\nScaling up e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 
pods)\nScaling e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" +Dec 20 07:23:11.484: INFO: stdout: "Created e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f\nScaling up e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" +STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. +Dec 20 07:23:11.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-xvmvk' +Dec 20 07:23:11.673: INFO: stderr: "" +Dec 20 07:23:11.673: INFO: stdout: "e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f-t7wxl " +Dec 20 07:23:11.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f-t7wxl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvmvk' +Dec 20 07:23:11.923: INFO: stderr: "" +Dec 20 07:23:11.923: INFO: stdout: "true" +Dec 20 07:23:11.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f-t7wxl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xvmvk' +Dec 20 07:23:12.137: INFO: stderr: "" +Dec 20 07:23:12.137: INFO: stdout: "docker.io/library/nginx:1.14-alpine" +Dec 20 07:23:12.137: INFO: e2e-test-nginx-rc-547d5d0bb219b68440cf1ea39bc8030f-t7wxl is verified up and running +[AfterEach] [k8s.io] Kubectl rolling-update + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 +Dec 20 07:23:12.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-xvmvk' +Dec 20 07:23:12.308: INFO: stderr: "" +Dec 20 07:23:12.308: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:23:12.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-xvmvk" for this suite. 
+Dec 20 07:23:18.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:23:18.450: INFO: namespace: e2e-tests-kubectl-xvmvk, resource: bindings, ignored listing per whitelist +Dec 20 07:23:18.456: INFO: namespace e2e-tests-kubectl-xvmvk deletion completed in 6.112143822s + +• [SLOW TEST:25.889 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl rolling-update + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should support rolling-update to same image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:23:18.456: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating secret with name secret-test-1f86b6c9-0428-11e9-b141-0a58ac1c1472 +STEP: 
Creating a pod to test consume secrets +Dec 20 07:23:18.572: INFO: Waiting up to 5m0s for pod "pod-secrets-1f873579-0428-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-secrets-sg5nx" to be "success or failure" +Dec 20 07:23:18.575: INFO: Pod "pod-secrets-1f873579-0428-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.959218ms +Dec 20 07:23:22.585: INFO: Pod "pod-secrets-1f873579-0428-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:23:22.589: INFO: Trying to get logs from node 10-6-155-34 pod pod-secrets-1f873579-0428-11e9-b141-0a58ac1c1472 container secret-volume-test: +STEP: delete the pod +Dec 20 07:23:22.609: INFO: Waiting for pod pod-secrets-1f873579-0428-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:23:22.613: INFO: Pod pod-secrets-1f873579-0428-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:23:22.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-secrets-sg5nx" for this suite. 
+Dec 20 07:23:28.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:23:28.734: INFO: namespace: e2e-tests-secrets-sg5nx, resource: bindings, ignored listing per whitelist +Dec 20 07:23:28.794: INFO: namespace e2e-tests-secrets-sg5nx deletion completed in 6.170993476s + +• [SLOW TEST:10.337 seconds] +[sig-storage] Secrets +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 + should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:23:28.794: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Dec 20 07:23:28.913: INFO: Waiting up to 5m0s for pod "pod-25b1232c-0428-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-jzcfj" to be "success or failure" +Dec 20 
07:23:28.920: INFO: Pod "pod-25b1232c-0428-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 6.472401ms +Dec 20 07:23:34.938: INFO: Pod "pod-25b1232c-0428-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025331052s +STEP: Saw pod success +Dec 20 07:23:34.939: INFO: Pod "pod-25b1232c-0428-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:23:34.943: INFO: Trying to get logs from node 10-6-155-34 pod pod-25b1232c-0428-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 07:23:34.969: INFO: Waiting for pod pod-25b1232c-0428-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:23:34.972: INFO: Pod pod-25b1232c-0428-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:23:34.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-jzcfj" for this suite. 
+Dec 20 07:23:40.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:23:41.071: INFO: namespace: e2e-tests-emptydir-jzcfj, resource: bindings, ignored listing per whitelist +Dec 20 07:23:41.136: INFO: namespace e2e-tests-emptydir-jzcfj deletion completed in 6.159065375s + +• [SLOW TEST:12.342 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:23:41.136: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API 
volume plugin +Dec 20 07:23:41.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d0c22e2-0428-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-bwljx" to be "success or failure" +Dec 20 07:23:41.260: INFO: Pod "downwardapi-volume-2d0c22e2-0428-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.877634ms +STEP: Saw pod success +Dec 20 07:23:45.270: INFO: Pod "downwardapi-volume-2d0c22e2-0428-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:23:45.276: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-2d0c22e2-0428-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 07:23:45.303: INFO: Waiting for pod downwardapi-volume-2d0c22e2-0428-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:23:45.306: INFO: Pod downwardapi-volume-2d0c22e2-0428-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:23:45.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-bwljx" for this suite. 
+Dec 20 07:23:51.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:23:51.406: INFO: namespace: e2e-tests-projected-bwljx, resource: bindings, ignored listing per whitelist +Dec 20 07:23:51.437: INFO: namespace e2e-tests-projected-bwljx deletion completed in 6.124945784s + +• [SLOW TEST:10.301 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:23:51.437: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name configmap-test-upd-33376922-0428-11e9-b141-0a58ac1c1472 +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:23:57.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-2qw24" for this suite. +Dec 20 07:24:19.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:24:19.822: INFO: namespace: e2e-tests-configmap-2qw24, resource: bindings, ignored listing per whitelist +Dec 20 07:24:19.824: INFO: namespace e2e-tests-configmap-2qw24 deletion completed in 22.150119757s + +• [SLOW TEST:28.387 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:24:19.824: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zgdr8 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Dec 20 07:24:20.074: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Dec 20 07:24:46.170: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.28.20.123:8080/dial?request=hostName&protocol=udp&host=172.28.20.117&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-zgdr8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 07:24:46.170: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 07:24:46.675: INFO: Waiting for endpoints: map[] +Dec 20 07:24:46.679: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.28.20.123:8080/dial?request=hostName&protocol=udp&host=172.28.240.112&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-zgdr8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 07:24:46.679: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 07:24:46.817: INFO: Waiting for endpoints: map[] +[AfterEach] [sig-network] Networking + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:24:46.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-pod-network-test-zgdr8" for this suite. 
+Dec 20 07:24:58.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:24:58.963: INFO: namespace: e2e-tests-pod-network-test-zgdr8, resource: bindings, ignored listing per whitelist +Dec 20 07:24:58.999: INFO: namespace e2e-tests-pod-network-test-zgdr8 deletion completed in 12.174191663s + +• [SLOW TEST:39.175 seconds] +[sig-network] Networking +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:24:58.999: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap 
e2e-tests-configmap-sfdvm/configmap-test-5b77142e-0428-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 07:24:59.135: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b77bd4e-0428-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-configmap-sfdvm" to be "success or failure" +Dec 20 07:24:59.144: INFO: Pod "pod-configmaps-5b77bd4e-0428-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 9.019603ms +Dec 20 07:25:01.153: INFO: Pod "pod-configmaps-5b77bd4e-0428-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017916316s +Dec 20 07:25:03.157: INFO: Pod "pod-configmaps-5b77bd4e-0428-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021311656s +STEP: Saw pod success +Dec 20 07:25:03.157: INFO: Pod "pod-configmaps-5b77bd4e-0428-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:25:03.159: INFO: Trying to get logs from node 10-6-155-34 pod pod-configmaps-5b77bd4e-0428-11e9-b141-0a58ac1c1472 container env-test: +STEP: delete the pod +Dec 20 07:25:03.184: INFO: Waiting for pod pod-configmaps-5b77bd4e-0428-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:25:03.193: INFO: Pod pod-configmaps-5b77bd4e-0428-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:25:03.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-sfdvm" for this suite. 
+Dec 20 07:25:09.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:25:09.300: INFO: namespace: e2e-tests-configmap-sfdvm, resource: bindings, ignored listing per whitelist +Dec 20 07:25:09.320: INFO: namespace e2e-tests-configmap-sfdvm deletion completed in 6.118190132s + +• [SLOW TEST:10.320 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:25:09.320: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Dec 20 07:25:09.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-619a4493-0428-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-75699" to be "success or failure" +Dec 20 07:25:09.431: INFO: Pod "downwardapi-volume-619a4493-0428-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.005264ms +Dec 20 07:25:11.436: INFO: Pod "downwardapi-volume-619a4493-0428-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009157177s +Dec 20 07:25:13.440: INFO: Pod "downwardapi-volume-619a4493-0428-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013151365s +STEP: Saw pod success +Dec 20 07:25:13.440: INFO: Pod "downwardapi-volume-619a4493-0428-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:25:13.443: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-619a4493-0428-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 07:25:13.461: INFO: Waiting for pod downwardapi-volume-619a4493-0428-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:25:13.464: INFO: Pod downwardapi-volume-619a4493-0428-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:25:13.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-75699" for this suite. 
+Dec 20 07:25:19.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:25:19.524: INFO: namespace: e2e-tests-projected-75699, resource: bindings, ignored listing per whitelist +Dec 20 07:25:19.614: INFO: namespace e2e-tests-projected-75699 deletion completed in 6.145037508s + +• [SLOW TEST:10.294 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:25:19.614: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Kubectl logs + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 +STEP: creating an rc +Dec 20 
07:25:19.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-8jvfb' +Dec 20 07:25:20.175: INFO: stderr: "" +Dec 20 07:25:20.175: INFO: stdout: "replicationcontroller/redis-master created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Waiting for Redis master to start. +Dec 20 07:25:21.180: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:25:21.180: INFO: Found 0 / 1 +Dec 20 07:25:22.180: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:25:22.180: INFO: Found 0 / 1 +Dec 20 07:25:23.179: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:25:23.179: INFO: Found 0 / 1 +Dec 20 07:25:24.180: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:25:24.180: INFO: Found 1 / 1 +Dec 20 07:25:24.180: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Dec 20 07:25:24.186: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:25:24.186: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +STEP: checking for a matching strings +Dec 20 07:25:24.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 logs redis-master-629cv redis-master --namespace=e2e-tests-kubectl-8jvfb' +Dec 20 07:25:24.391: INFO: stderr: "" +Dec 20 07:25:24.391: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Dec 07:25:23.401 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Dec 07:25:23.402 # Server started, Redis version 3.2.12\n1:M 20 Dec 07:25:23.402 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Dec 07:25:23.402 * The server is now ready to accept connections on port 6379\n" +STEP: limiting log lines +Dec 20 07:25:24.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 log redis-master-629cv redis-master --namespace=e2e-tests-kubectl-8jvfb --tail=1' +Dec 20 07:25:24.606: INFO: stderr: "" +Dec 20 07:25:24.606: INFO: stdout: "1:M 20 Dec 07:25:23.402 * The server is now ready to accept connections on port 6379\n" +STEP: limiting log bytes +Dec 20 07:25:24.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 log redis-master-629cv redis-master --namespace=e2e-tests-kubectl-8jvfb --limit-bytes=1' +Dec 20 07:25:24.905: INFO: stderr: "" +Dec 20 07:25:24.905: INFO: stdout: " " +STEP: exposing timestamps +Dec 20 07:25:24.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 log redis-master-629cv redis-master --namespace=e2e-tests-kubectl-8jvfb --tail=1 --timestamps' 
+Dec 20 07:25:25.155: INFO: stderr: "" +Dec 20 07:25:25.155: INFO: stdout: "2018-12-20T07:25:23.402559376Z 1:M 20 Dec 07:25:23.402 * The server is now ready to accept connections on port 6379\n" +STEP: restricting to a time range +Dec 20 07:25:27.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 log redis-master-629cv redis-master --namespace=e2e-tests-kubectl-8jvfb --since=1s' +Dec 20 07:25:27.869: INFO: stderr: "" +Dec 20 07:25:27.869: INFO: stdout: "" +Dec 20 07:25:27.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 log redis-master-629cv redis-master --namespace=e2e-tests-kubectl-8jvfb --since=24h' +Dec 20 07:25:28.063: INFO: stderr: "" +Dec 20 07:25:28.063: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Dec 07:25:23.401 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Dec 07:25:23.402 # Server started, Redis version 3.2.12\n1:M 20 Dec 07:25:23.402 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 20 Dec 07:25:23.402 * The server is now ready to accept connections on port 6379\n" +[AfterEach] [k8s.io] Kubectl logs + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 +STEP: using delete to clean up resources +Dec 20 07:25:28.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8jvfb' +Dec 20 07:25:28.252: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Dec 20 07:25:28.252: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" +Dec 20 07:25:28.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-8jvfb' +Dec 20 07:25:28.416: INFO: stderr: "No resources found.\n" +Dec 20 07:25:28.416: INFO: stdout: "" +Dec 20 07:25:28.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -l name=nginx --namespace=e2e-tests-kubectl-8jvfb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Dec 20 07:25:28.588: INFO: stderr: "" +Dec 20 07:25:28.588: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:25:28.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-8jvfb" for this suite. 
+Dec 20 07:25:34.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:25:34.715: INFO: namespace: e2e-tests-kubectl-8jvfb, resource: bindings, ignored listing per whitelist +Dec 20 07:25:34.776: INFO: namespace e2e-tests-kubectl-8jvfb deletion completed in 6.181742249s + +• [SLOW TEST:15.162 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl logs + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:25:34.776: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[It] should add annotations for pods in rc [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating Redis RC +Dec 20 07:25:34.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-7kcxn' +Dec 20 07:25:35.187: INFO: stderr: "" +Dec 20 07:25:35.187: INFO: stdout: "replicationcontroller/redis-master created\n" +STEP: Waiting for Redis master to start. +Dec 20 07:25:36.192: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:25:36.192: INFO: Found 0 / 1 +Dec 20 07:25:37.192: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:25:37.192: INFO: Found 0 / 1 +Dec 20 07:25:38.191: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:25:38.191: INFO: Found 0 / 1 +Dec 20 07:25:39.192: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:25:39.192: INFO: Found 1 / 1 +Dec 20 07:25:39.192: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Dec 20 07:25:39.197: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:25:39.198: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Dec 20 07:25:39.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 patch pod redis-master-qv8tr --namespace=e2e-tests-kubectl-7kcxn -p {"metadata":{"annotations":{"x":"y"}}}' +Dec 20 07:25:39.335: INFO: stderr: "" +Dec 20 07:25:39.335: INFO: stdout: "pod/redis-master-qv8tr patched\n" +STEP: checking annotations +Dec 20 07:25:39.339: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:25:39.339: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:25:39.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-7kcxn" for this suite. +Dec 20 07:26:01.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:26:01.434: INFO: namespace: e2e-tests-kubectl-7kcxn, resource: bindings, ignored listing per whitelist +Dec 20 07:26:01.467: INFO: namespace e2e-tests-kubectl-7kcxn deletion completed in 22.122502602s + +• [SLOW TEST:26.691 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl patch + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should add annotations for pods in rc [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:26:01.467: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in 
namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace e2e-tests-statefulset-q2pnn +[It] Should recreate evicted statefulset [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-q2pnn +STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-q2pnn +STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-q2pnn +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-q2pnn +Dec 20 07:26:05.621: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-q2pnn, name: ss-0, uid: 82bb65f5-0428-11e9-b07b-0242ac120004, status phase: Pending. Waiting for statefulset controller to delete. +Dec 20 07:26:05.994: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-q2pnn, name: ss-0, uid: 82bb65f5-0428-11e9-b07b-0242ac120004, status phase: Failed. Waiting for statefulset controller to delete. +Dec 20 07:26:06.003: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-q2pnn, name: ss-0, uid: 82bb65f5-0428-11e9-b07b-0242ac120004, status phase: Failed. Waiting for statefulset controller to delete. 
+Dec 20 07:26:06.013: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-q2pnn +STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-q2pnn +STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-q2pnn and will be in running state +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Dec 20 07:26:12.052: INFO: Deleting all statefulset in ns e2e-tests-statefulset-q2pnn +Dec 20 07:26:12.061: INFO: Scaling statefulset ss to 0 +Dec 20 07:26:32.081: INFO: Waiting for statefulset status.replicas updated to 0 +Dec 20 07:26:32.085: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:26:32.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-statefulset-q2pnn" for this suite. 
+Dec 20 07:26:38.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:26:38.140: INFO: namespace: e2e-tests-statefulset-q2pnn, resource: bindings, ignored listing per whitelist +Dec 20 07:26:38.243: INFO: namespace e2e-tests-statefulset-q2pnn deletion completed in 6.139965949s + +• [SLOW TEST:36.776 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + Should recreate evicted statefulset [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run job + should create a job from an image when restart is OnFailure [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:26:38.244: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Kubectl run job + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 +[It] should create a job from an image when restart is OnFailure [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: running the image docker.io/library/nginx:1.14-alpine +Dec 20 07:26:38.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-x49td' +Dec 20 07:26:38.523: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Dec 20 07:26:38.523: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" +STEP: verifying the job e2e-test-nginx-job was created +[AfterEach] [k8s.io] Kubectl run job + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 +Dec 20 07:26:38.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-x49td' +Dec 20 07:26:38.689: INFO: stderr: "" +Dec 20 07:26:38.689: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:26:38.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-x49td" for this suite. 
+Dec 20 07:27:00.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:27:00.755: INFO: namespace: e2e-tests-kubectl-x49td, resource: bindings, ignored listing per whitelist +Dec 20 07:27:00.831: INFO: namespace e2e-tests-kubectl-x49td deletion completed in 22.132811382s + +• [SLOW TEST:22.587 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl run job + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should create a job from an image when restart is OnFailure [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:27:00.831: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", 
UID:"a4129db1-0428-11e9-b07b-0242ac120004", Controller:(*bool)(0xc00125eee2), BlockOwnerDeletion:(*bool)(0xc00125eee3)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:27:05.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-gc-667pz" for this suite. +Dec 20 07:27:12.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:27:12.068: INFO: namespace: e2e-tests-gc-667pz, resource: bindings, ignored listing per whitelist +Dec 20 07:27:12.122: INFO: namespace e2e-tests-gc-667pz deletion completed in 6.13164779s + +• [SLOW TEST:11.291 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:27:12.122: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should 
support (non-root,0644,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0644 on node default medium +Dec 20 07:27:12.233: INFO: Waiting up to 5m0s for pod "pod-aaccc029-0428-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-246k6" to be "success or failure" +STEP: Saw pod success +Dec 20 07:27:16.249: INFO: Pod "pod-aaccc029-0428-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:27:16.253: INFO: Trying to get logs from node 10-6-155-34 pod pod-aaccc029-0428-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 07:27:16.278: INFO: Waiting for pod pod-aaccc029-0428-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:27:16.282: INFO: Pod pod-aaccc029-0428-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:27:16.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-246k6" for this suite. 
+Dec 20 07:27:22.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:27:22.312: INFO: namespace: e2e-tests-emptydir-246k6, resource: bindings, ignored listing per whitelist +Dec 20 07:27:22.427: INFO: namespace e2e-tests-emptydir-246k6 deletion completed in 6.136645305s + +• [SLOW TEST:10.305 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (non-root,0644,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:27:22.428: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 07:27:22.535: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +[AfterEach] [sig-api-machinery] CustomResourceDefinition 
resources + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:27:23.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-custom-resource-definition-xrxft" for this suite. +Dec 20 07:27:29.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:27:29.774: INFO: namespace: e2e-tests-custom-resource-definition-xrxft, resource: bindings, ignored listing per whitelist +Dec 20 07:27:29.851: INFO: namespace e2e-tests-custom-resource-definition-xrxft deletion completed in 6.16311687s + +• [SLOW TEST:7.424 seconds] +[sig-api-machinery] CustomResourceDefinition resources +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + Simple CustomResourceDefinition + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 + creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:27:29.851: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: 
Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Dec 20 07:27:30.044: INFO: Waiting up to 5m0s for pod "pod-b56a67dd-0428-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-p7k2l" to be "success or failure" +Dec 20 07:27:30.048: INFO: Pod "pod-b56a67dd-0428-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.684283ms +Dec 20 07:27:32.058: INFO: Pod "pod-b56a67dd-0428-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013962294s +Dec 20 07:27:34.064: INFO: Pod "pod-b56a67dd-0428-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019293508s +STEP: Saw pod success +Dec 20 07:27:34.064: INFO: Pod "pod-b56a67dd-0428-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:27:34.067: INFO: Trying to get logs from node 10-6-155-34 pod pod-b56a67dd-0428-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 07:27:34.103: INFO: Waiting for pod pod-b56a67dd-0428-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:27:34.106: INFO: Pod pod-b56a67dd-0428-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:27:34.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-p7k2l" for this suite. 
+Dec 20 07:27:40.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:27:40.241: INFO: namespace: e2e-tests-emptydir-p7k2l, resource: bindings, ignored listing per whitelist +Dec 20 07:27:40.288: INFO: namespace e2e-tests-emptydir-p7k2l deletion completed in 6.171006247s + +• [SLOW TEST:10.437 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (root,0666,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:27:40.288: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating replication controller my-hostname-basic-bb9b984f-0428-11e9-b141-0a58ac1c1472 +Dec 20 07:27:40.432: INFO: Pod name my-hostname-basic-bb9b984f-0428-11e9-b141-0a58ac1c1472: Found 0 pods out of 
1 +Dec 20 07:27:45.436: INFO: Pod name my-hostname-basic-bb9b984f-0428-11e9-b141-0a58ac1c1472: Found 1 pods out of 1 +Dec 20 07:27:45.437: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-bb9b984f-0428-11e9-b141-0a58ac1c1472" are running +Dec 20 07:27:45.439: INFO: Pod "my-hostname-basic-bb9b984f-0428-11e9-b141-0a58ac1c1472-8p22z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-12-20 07:27:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-12-20 07:27:43 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-12-20 07:27:43 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-12-20 07:27:40 +0000 UTC Reason: Message:}]) +Dec 20 07:27:45.439: INFO: Trying to dial the pod +Dec 20 07:27:50.453: INFO: Controller my-hostname-basic-bb9b984f-0428-11e9-b141-0a58ac1c1472: Got expected result from replica 1 [my-hostname-basic-bb9b984f-0428-11e9-b141-0a58ac1c1472-8p22z]: "my-hostname-basic-bb9b984f-0428-11e9-b141-0a58ac1c1472-8p22z", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:27:50.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-replication-controller-28cpl" for this suite. 
+Dec 20 07:27:56.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:27:56.574: INFO: namespace: e2e-tests-replication-controller-28cpl, resource: bindings, ignored listing per whitelist +Dec 20 07:27:56.587: INFO: namespace e2e-tests-replication-controller-28cpl deletion completed in 6.124233316s + +• [SLOW TEST:16.298 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:27:56.587: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc 
simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +W1220 07:28:06.861301 17 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +Dec 20 07:28:06.861: INFO: For apiserver_request_count: +For apiserver_request_latencies_summary: +For etcd_helper_cache_entry_count: +For etcd_helper_cache_hit_count: +For etcd_helper_cache_miss_count: +For etcd_request_cache_add_latencies_summary: +For etcd_request_cache_get_latencies_summary: +For etcd_request_latencies_summary: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:28:06.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-gc-jgxdr" for this suite. 
+Dec 20 07:28:12.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:28:12.949: INFO: namespace: e2e-tests-gc-jgxdr, resource: bindings, ignored listing per whitelist +Dec 20 07:28:13.155: INFO: namespace e2e-tests-gc-jgxdr deletion completed in 6.28776552s + +• [SLOW TEST:16.569 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:28:13.156: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should invoke init containers on a RestartAlways pod [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the pod +Dec 20 07:28:13.387: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:28:19.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-init-container-tqjww" for this suite. +Dec 20 07:28:41.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:28:41.330: INFO: namespace: e2e-tests-init-container-tqjww, resource: bindings, ignored listing per whitelist +Dec 20 07:28:41.431: INFO: namespace e2e-tests-init-container-tqjww deletion completed in 22.124784069s + +• [SLOW TEST:28.275 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 
07:28:41.431: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-9mnjs +Dec 20 07:28:45.539: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-9mnjs +STEP: checking the pod's current state and verifying that restartCount is present +Dec 20 07:28:45.543: INFO: Initial restart count of pod liveness-exec is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:32:46.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-container-probe-9mnjs" for this suite. 
+Dec 20 07:32:52.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:32:52.226: INFO: namespace: e2e-tests-container-probe-9mnjs, resource: bindings, ignored listing per whitelist +Dec 20 07:32:52.279: INFO: namespace e2e-tests-container-probe-9mnjs deletion completed in 6.130120188s + +• [SLOW TEST:250.848 seconds] +[k8s.io] Probing container +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:32:52.279: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name configmap-test-volume-758c5e20-0429-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 
07:32:52.391: INFO: Waiting up to 5m0s for pod "pod-configmaps-758ce5a2-0429-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-configmap-pjpgf" to be "success or failure" +Dec 20 07:32:52.395: INFO: Pod "pod-configmaps-758ce5a2-0429-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.229568ms +Dec 20 07:32:54.401: INFO: Pod "pod-configmaps-758ce5a2-0429-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009040513s +Dec 20 07:32:56.414: INFO: Pod "pod-configmaps-758ce5a2-0429-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022247061s +STEP: Saw pod success +Dec 20 07:32:56.414: INFO: Pod "pod-configmaps-758ce5a2-0429-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:32:56.426: INFO: Trying to get logs from node 10-6-155-34 pod pod-configmaps-758ce5a2-0429-11e9-b141-0a58ac1c1472 container configmap-volume-test: +STEP: delete the pod +Dec 20 07:32:56.463: INFO: Waiting for pod pod-configmaps-758ce5a2-0429-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:32:56.468: INFO: Pod pod-configmaps-758ce5a2-0429-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:32:56.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-pjpgf" for this suite. 
+Dec 20 07:33:02.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:33:02.530: INFO: namespace: e2e-tests-configmap-pjpgf, resource: bindings, ignored listing per whitelist +Dec 20 07:33:02.606: INFO: namespace e2e-tests-configmap-pjpgf deletion completed in 6.130986487s + +• [SLOW TEST:10.326 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:33:02.606: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A 
and ensuring the correct watchers observe the notification +Dec 20 07:33:02.717: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-a,UID:7bb3d692-0429-11e9-b07b-0242ac120004,ResourceVersion:951495,Generation:0,CreationTimestamp:2018-12-20 07:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +Dec 20 07:33:02.718: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-a,UID:7bb3d692-0429-11e9-b07b-0242ac120004,ResourceVersion:951495,Generation:0,CreationTimestamp:2018-12-20 07:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Dec 20 07:33:12.726: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-a,UID:7bb3d692-0429-11e9-b07b-0242ac120004,ResourceVersion:951509,Generation:0,CreationTimestamp:2018-12-20 07:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +Dec 20 07:33:12.726: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-a,UID:7bb3d692-0429-11e9-b07b-0242ac120004,ResourceVersion:951509,Generation:0,CreationTimestamp:2018-12-20 07:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Dec 20 07:33:22.736: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-a,UID:7bb3d692-0429-11e9-b07b-0242ac120004,ResourceVersion:951524,Generation:0,CreationTimestamp:2018-12-20 07:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Dec 20 07:33:22.736: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-a,UID:7bb3d692-0429-11e9-b07b-0242ac120004,ResourceVersion:951524,Generation:0,CreationTimestamp:2018-12-20 07:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Dec 20 07:33:32.744: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-a,UID:7bb3d692-0429-11e9-b07b-0242ac120004,ResourceVersion:951538,Generation:0,CreationTimestamp:2018-12-20 07:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Dec 20 07:33:32.744: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-a,UID:7bb3d692-0429-11e9-b07b-0242ac120004,ResourceVersion:951538,Generation:0,CreationTimestamp:2018-12-20 07:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Dec 20 07:33:42.751: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-b,UID:939056e0-0429-11e9-b07b-0242ac120004,ResourceVersion:951552,Generation:0,CreationTimestamp:2018-12-20 07:33:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +Dec 20 07:33:42.751: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-b,UID:939056e0-0429-11e9-b07b-0242ac120004,ResourceVersion:951552,Generation:0,CreationTimestamp:2018-12-20 07:33:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Dec 20 07:33:52.759: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-b,UID:939056e0-0429-11e9-b07b-0242ac120004,ResourceVersion:951566,Generation:0,CreationTimestamp:2018-12-20 07:33:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +Dec 20 07:33:52.759: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vpwkm,SelfLink:/api/v1/namespaces/e2e-tests-watch-vpwkm/configmaps/e2e-watch-test-configmap-b,UID:939056e0-0429-11e9-b07b-0242ac120004,ResourceVersion:951566,Generation:0,CreationTimestamp:2018-12-20 07:33:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:34:02.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-watch-vpwkm" for this suite. 
+Dec 20 07:34:08.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:34:08.845: INFO: namespace: e2e-tests-watch-vpwkm, resource: bindings, ignored listing per whitelist +Dec 20 07:34:08.903: INFO: namespace e2e-tests-watch-vpwkm deletion completed in 6.137806937s + +• [SLOW TEST:66.297 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:34:08.903: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete RS created by deployment when not orphaning [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for all rs to be garbage collected +STEP: expected 0 pods, got 2 pods +STEP: Gathering 
metrics +W1220 07:34:09.630820 17 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +Dec 20 07:34:09.630: INFO: For apiserver_request_count: +For apiserver_request_latencies_summary: +For etcd_helper_cache_entry_count: +For etcd_helper_cache_hit_count: +For etcd_helper_cache_miss_count: +For etcd_request_cache_add_latencies_summary: +For etcd_request_cache_get_latencies_summary: +For etcd_request_latencies_summary: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:34:09.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-gc-hdvk2" for this suite. 
+Dec 20 07:34:15.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:34:15.754: INFO: namespace: e2e-tests-gc-hdvk2, resource: bindings, ignored listing per whitelist +Dec 20 07:34:15.792: INFO: namespace e2e-tests-gc-hdvk2 deletion completed in 6.155489719s + +• [SLOW TEST:6.889 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should delete RS created by deployment when not orphaning [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run default + should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:34:15.792: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Kubectl run default + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 +[It] should create an rc or deployment from an image [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: running the image docker.io/library/nginx:1.14-alpine +Dec 20 07:34:15.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-gxf69' +Dec 20 07:34:16.338: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Dec 20 07:34:16.338: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" +STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created +[AfterEach] [k8s.io] Kubectl run default + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 +Dec 20 07:34:16.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-gxf69' +Dec 20 07:34:16.557: INFO: stderr: "" +Dec 20 07:34:16.557: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:34:16.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-gxf69" for this suite. 
+Dec 20 07:34:22.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:34:22.682: INFO: namespace: e2e-tests-kubectl-gxf69, resource: bindings, ignored listing per whitelist +Dec 20 07:34:22.727: INFO: namespace e2e-tests-kubectl-gxf69 deletion completed in 6.158036975s + +• [SLOW TEST:6.935 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl run default + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:34:22.727: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide node allocatable (memory) 
as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Dec 20 07:34:22.842: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab7613d0-0429-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-vvzv4" to be "success or failure" +Dec 20 07:34:22.846: INFO: Pod "downwardapi-volume-ab7613d0-0429-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086034ms +Dec 20 07:34:24.853: INFO: Pod "downwardapi-volume-ab7613d0-0429-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011399784s +Dec 20 07:34:26.867: INFO: Pod "downwardapi-volume-ab7613d0-0429-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025691464s +STEP: Saw pod success +Dec 20 07:34:26.867: INFO: Pod "downwardapi-volume-ab7613d0-0429-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:34:26.871: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-ab7613d0-0429-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 07:34:26.915: INFO: Waiting for pod downwardapi-volume-ab7613d0-0429-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:34:26.921: INFO: Pod downwardapi-volume-ab7613d0-0429-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:34:26.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-vvzv4" for this suite. 
+Dec 20 07:34:32.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:34:33.068: INFO: namespace: e2e-tests-downward-api-vvzv4, resource: bindings, ignored listing per whitelist +Dec 20 07:34:33.099: INFO: namespace e2e-tests-downward-api-vvzv4 deletion completed in 6.153846868s + +• [SLOW TEST:10.371 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:34:33.101: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward api env vars +Dec 20 
07:34:33.240: INFO: Waiting up to 5m0s for pod "downward-api-b1a8df76-0429-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-x7tlx" to be "success or failure" +Dec 20 07:34:33.243: INFO: Pod "downward-api-b1a8df76-0429-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.402927ms +Dec 20 07:34:35.256: INFO: Pod "downward-api-b1a8df76-0429-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015737158s +Dec 20 07:34:37.260: INFO: Pod "downward-api-b1a8df76-0429-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020589518s +STEP: Saw pod success +Dec 20 07:34:37.261: INFO: Pod "downward-api-b1a8df76-0429-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:34:37.264: INFO: Trying to get logs from node 10-6-155-34 pod downward-api-b1a8df76-0429-11e9-b141-0a58ac1c1472 container dapi-container: +STEP: delete the pod +Dec 20 07:34:37.287: INFO: Waiting for pod downward-api-b1a8df76-0429-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:34:37.297: INFO: Pod downward-api-b1a8df76-0429-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:34:37.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-x7tlx" for this suite. 
+Dec 20 07:34:43.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:34:43.351: INFO: namespace: e2e-tests-downward-api-x7tlx, resource: bindings, ignored listing per whitelist +Dec 20 07:34:43.499: INFO: namespace e2e-tests-downward-api-x7tlx deletion completed in 6.192875296s + +• [SLOW TEST:10.398 seconds] +[sig-node] Downward API +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:34:43.500: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[BeforeEach] when scheduling a busybox command that always fails in a pod + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:34:47.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubelet-test-9bx7s" for this suite. +Dec 20 07:34:53.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:34:53.862: INFO: namespace: e2e-tests-kubelet-test-9bx7s, resource: bindings, ignored listing per whitelist +Dec 20 07:34:53.916: INFO: namespace e2e-tests-kubelet-test-9bx7s deletion completed in 6.230187276s + +• [SLOW TEST:10.417 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 + should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-network] DNS + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:34:53.917: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-jln2c A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-jln2c;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-jln2c A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-jln2c.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-jln2c.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-jln2c.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc SRV)" && test -n 
"$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-jln2c.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-jln2c.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-jln2c.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 129.11.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.11.129_udp@PTR;check="$$(dig +tcp +noall +answer +search 129.11.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.11.129_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-jln2c A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-jln2c;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-jln2c A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-jln2c;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-jln2c.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-jln2c.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-jln2c.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-jln2c.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-jln2c.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-jln2c.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-jln2c.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 129.11.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.11.129_udp@PTR;check="$$(dig +tcp +noall +answer +search 129.11.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.11.129_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Dec 20 07:35:00.105: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.115: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.129: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.134: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.139: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod 
e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.147: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.153: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.159: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.163: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.170: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.175: INFO: Unable to read 10.101.11.129_udp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.183: INFO: Unable to read 10.101.11.129_tcp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods 
dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.191: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.195: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.200: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.214: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.220: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.225: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.229: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.235: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.239: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.243: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.250: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.265: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.276: INFO: Unable to read 10.101.11.129_udp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.283: INFO: Unable to read 10.101.11.129_tcp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:00.283: INFO: Lookups using e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472 failed for: [wheezy_tcp@dns-test-service 
wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c wheezy_udp@dns-test-service.e2e-tests-dns-jln2c.svc wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.11.129_udp@PTR 10.101.11.129_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jln2c jessie_tcp@dns-test-service.e2e-tests-dns-jln2c jessie_udp@dns-test-service.e2e-tests-dns-jln2c.svc jessie_tcp@dns-test-service.e2e-tests-dns-jln2c.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.101.11.129_udp@PTR 10.101.11.129_tcp@PTR] + +Dec 20 07:35:05.299: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.323: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.328: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.333: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c.svc from pod 
e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.339: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.343: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.348: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.356: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.382: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.394: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.405: INFO: Unable to read 10.101.11.129_udp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested 
resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.419: INFO: Unable to read 10.101.11.129_tcp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.425: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.435: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.443: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.451: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.458: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.464: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.472: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.486: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.509: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:05.569: INFO: Lookups using e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472 failed for: [wheezy_tcp@dns-test-service wheezy_tcp@PodARecord 10.101.11.129_udp@PTR 10.101.11.129_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jln2c jessie_tcp@dns-test-service.e2e-tests-dns-jln2c jessie_udp@dns-test-service.e2e-tests-dns-jln2c.svc jessie_tcp@dns-test-service.e2e-tests-dns-jln2c.svc + +Dec 20 07:35:10.299: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.309: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c from pod ewheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.352: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod 
e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.359: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.367: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get +Dec 20 07:35:10.448: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.453: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.461: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.466: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.471: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods 
dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.477: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.483: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.487: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.492: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:10.496: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-jln2c/ +Dec 20 07:35:10.507: INFO: Lookups using e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472 failed for: [wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c wheezy_udp@dns-test-service.e2e-tests-dns-jln2c.svc wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.11.129_udp@PTR 10.101.11.129_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jln2c + +Dec 20 
07:35:15.296: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.365: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.371: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.378: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.384: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.423: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.431: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: 
the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.460: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.467: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.477: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.482: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.489: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.495: INFO: Unable to read 10.101.11.129_udp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.500: INFO: Unable to read 10.101.11.129_tcp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:15.500: INFO: Lookups using 
e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472 failed for: [wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c wheezy_udp@dns-test-service.e2e-tests-dns-jln2c.svc wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.11.129_udp@PTR 10.101.11.129_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jln2c jessie_tcp@dns-test-service.e2e-tests-dns-jln2c jessie_udp@dns-test-service.e2e-tests-dns-jln2c.svc jessie_tcp@dns-test-service.e2e-tests-dns-jln2c.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.101.11.129_udp@PTR 10.101.11.129_tcp@PTR] + +Dec 20 07:35:20.303: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.319: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.326: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 
20 07:35:20.333: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.340: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.344: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.438: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.445: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.456: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.463: INFO: Unable to read 
jessie_udp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.480: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.487: INFO: Unable to read 10.101.11.129_udp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.494: INFO: Unable to read 10.101.11.129_tcp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:20.494: INFO: Lookups using e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472 failed for: [wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c wheezy_udp@dns-test-service.e2e-tests-dns-jln2c.svc wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.11.129_udp@PTR 10.101.11.129_tcp@PTR jessie_udp@dns-test-service + +Dec 20 07:35:25.300: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.310: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c from pod 
e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.315: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.322: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.328: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.335: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.343: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.347: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.351: INFO: Unable to read wheezy_udp@PodARecord from pod 
e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.356: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.360: INFO: Unable to read 10.101.11.129_udp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.363: INFO: Unable to read 10.101.11.129_tcp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.370: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.375: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.381: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.386: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.394: INFO: 
Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.400: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.407: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.413: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.421: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.427: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.433: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.440: INFO: Unable to read jessie_tcp@PodARecord from pod 
e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.444: INFO: Unable to read 10.101.11.129_udp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.448: INFO: Unable to read 10.101.11.129_tcp@PTR from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:25.448: INFO: Lookups using e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472 failed for: [wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c wheezy_udp@dns-test-service.e2e-tests-dns-jln2c.svc wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.11.129_udp@PTR 10.101.11.129_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jln2c jessie_tcp@dns-test-service.e2e-tests-dns-jln2c jessie_udp@dns-test-service.e2e-tests-dns-jln2c.svc jessie_tcp@dns-test-service.e2e-tests-dns-jln2c.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.101.11.129_udp@PTR 10.101.11.129_tcp@PTR] + +Dec 20 07:35:30.298: INFO: Unable to read wheezy_tcp@dns-test-service from pod 
e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.312: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.357: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.362: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.369: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.373: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jln2c from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.379: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.385: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get 
pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.392: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.398: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.402: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.407: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.415: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.420: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472) +Dec 20 07:35:30.439: INFO: Lookups using e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472 failed for: [wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-jln2c jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-jln2c 
jessie_tcp@dns-test-service.e2e-tests-dns-jln2c jessie_udp@dns-test-service.e2e-tests-dns-jln2c.svc jessie_tcp@dns-test-service.e2e-tests-dns-jln2c.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-jln2c.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-jln2c.svc jessie_udp@PodARecord jessie_tcp@PodARecord] + +Dec 20 07:35:35.433: INFO: DNS probes using e2e-tests-dns-jln2c/dns-test-be11fb0f-0429-11e9-b141-0a58ac1c1472 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:35:35.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-dns-jln2c" for this suite. +Dec 20 07:35:41.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:35:41.607: INFO: namespace: e2e-tests-dns-jln2c, resource: bindings, ignored listing per whitelist +Dec 20 07:35:41.685: INFO: namespace e2e-tests-dns-jln2c deletion completed in 6.160177518s + +• [SLOW TEST:47.768 seconds] +[sig-network] DNS +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide DNS for services [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl api-versions + should check if v1 is in available api versions [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:35:41.685: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[It] should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: validating api versions +Dec 20 07:35:41.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 api-versions' +Dec 20 07:35:41.955: INFO: stderr: "" +Dec 20 07:35:41.955: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\nbatch/v2alpha1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:35:41.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-5h2hp" for this suite. +Dec 20 07:35:47.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:35:47.997: INFO: namespace: e2e-tests-kubectl-5h2hp, resource: bindings, ignored listing per whitelist +Dec 20 07:35:48.108: INFO: namespace e2e-tests-kubectl-5h2hp deletion completed in 6.139408s + +• [SLOW TEST:6.424 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl api-versions + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[k8s.io] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:35:48.109: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating pod +Dec 20 07:35:52.290: INFO: Pod pod-hostip-de6210e4-0429-11e9-b141-0a58ac1c1472 has hostIP: 10.6.155.34 +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:35:52.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-pods-m2vpn" for this suite. +Dec 20 07:36:14.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:36:14.338: INFO: namespace: e2e-tests-pods-m2vpn, resource: bindings, ignored listing per whitelist +Dec 20 07:36:14.421: INFO: namespace e2e-tests-pods-m2vpn deletion completed in 22.119048633s + +• [SLOW TEST:26.312 seconds] +[k8s.io] Pods +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should get a host IP [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Secrets + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:36:14.421: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating secret with name secret-test-ee116dfa-0429-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume secrets +Dec 20 07:36:14.602: INFO: Waiting up to 5m0s for pod "pod-secrets-ee127f01-0429-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-secrets-mjmkw" to be "success or failure" +Dec 20 07:36:14.606: INFO: Pod "pod-secrets-ee127f01-0429-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118912ms +Dec 20 07:36:18.617: INFO: Pod "pod-secrets-ee127f01-0429-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:36:18.621: INFO: Trying to get logs from node 10-6-155-34 pod pod-secrets-ee127f01-0429-11e9-b141-0a58ac1c1472 container secret-volume-test: +STEP: delete the pod +Dec 20 07:36:18.644: INFO: Waiting for pod pod-secrets-ee127f01-0429-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:36:18.648: INFO: Pod pod-secrets-ee127f01-0429-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:36:18.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-secrets-mjmkw" for this suite. 
+Dec 20 07:36:24.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:36:24.734: INFO: namespace: e2e-tests-secrets-mjmkw, resource: bindings, ignored listing per whitelist +Dec 20 07:36:24.773: INFO: namespace e2e-tests-secrets-mjmkw deletion completed in 6.116601007s + +• [SLOW TEST:10.352 seconds] +[sig-storage] Secrets +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:36:24.773: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 07:36:24.874: 
INFO: Creating deployment "test-recreate-deployment" +Dec 20 07:36:24.895: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Dec 20 07:36:24.908: INFO: Waiting deployment "test-recreate-deployment" to complete +Dec 20 07:36:24.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63680888184, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63680888184, loc:(*time.Location)(0x7b33b80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not +Dec 20 07:36:28.919: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Dec 20 07:36:28.930: INFO: Updating deployment test-recreate-deployment +Dec 20 07:36:28.930: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Dec 20 07:36:29.011: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-8lh72,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8lh72/deployments/test-recreate-deployment,UID:f433884b-0429-11e9-b07b-0242ac120004,ResourceVersion:952131,Generation:2,CreationTimestamp:2018-12-20 07:36:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2018-12-20 07:36:28 +0000 UTC 2018-12-20 07:36:28 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2018-12-20 
07:36:28 +0000 UTC 2018-12-20 07:36:24 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-697fbf54bf" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} + +Dec 20 07:36:29.018: INFO: New ReplicaSet "test-recreate-deployment-697fbf54bf" of Deployment "test-recreate-deployment": +Dec 20 07:36:29.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-deployment-8lh72" for this suite. +Dec 20 07:36:35.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:36:35.194: INFO: namespace: e2e-tests-deployment-8lh72, resource: bindings, ignored listing per whitelist +Dec 20 07:36:35.221: INFO: namespace e2e-tests-deployment-8lh72 deletion completed in 6.185891566s + +• [SLOW TEST:10.449 seconds] +[sig-apps] Deployment +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:36:35.222: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in 
namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating secret with name secret-test-fa7757a7-0429-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume secrets +Dec 20 07:36:35.409: INFO: Waiting up to 5m0s for pod "pod-secrets-fa7872cf-0429-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-secrets-6sk8x" to be "success or failure" +Dec 20 07:36:35.415: INFO: Pod "pod-secrets-fa7872cf-0429-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035421ms +Dec 20 07:36:37.421: INFO: Pod "pod-secrets-fa7872cf-0429-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012352152s +Dec 20 07:36:39.428: INFO: Pod "pod-secrets-fa7872cf-0429-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018751619s +STEP: Saw pod success +Dec 20 07:36:39.428: INFO: Pod "pod-secrets-fa7872cf-0429-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:36:39.431: INFO: Trying to get logs from node 10-6-155-34 pod pod-secrets-fa7872cf-0429-11e9-b141-0a58ac1c1472 container secret-env-test: +STEP: delete the pod +Dec 20 07:36:39.451: INFO: Waiting for pod pod-secrets-fa7872cf-0429-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:36:39.456: INFO: Pod pod-secrets-fa7872cf-0429-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-api-machinery] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:36:39.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-secrets-6sk8x" for this suite. 
+Dec 20 07:36:45.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:36:45.510: INFO: namespace: e2e-tests-secrets-6sk8x, resource: bindings, ignored listing per whitelist +Dec 20 07:36:45.670: INFO: namespace e2e-tests-secrets-6sk8x deletion completed in 6.204312525s + +• [SLOW TEST:10.448 seconds] +[sig-api-machinery] Secrets +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:36:45.671: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the pod +Dec 20 07:36:45.834: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:36:49.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-init-container-tjc9s" for this suite. +Dec 20 07:36:55.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:36:55.973: INFO: namespace: e2e-tests-init-container-tjc9s, resource: bindings, ignored listing per whitelist +Dec 20 07:36:56.120: INFO: namespace e2e-tests-init-container-tjc9s deletion completed in 6.210073708s + +• [SLOW TEST:10.449 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[k8s.io] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a 
kubernetes client +Dec 20 07:36:56.120: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-nvlgb +Dec 20 07:37:00.255: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-nvlgb +STEP: checking the pod's current state and verifying that restartCount is present +Dec 20 07:37:00.258: INFO: Initial restart count of pod liveness-exec is 0 +Dec 20 07:37:52.425: INFO: Restart count of pod e2e-tests-container-probe-nvlgb/liveness-exec is now 1 (52.167780686s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:37:52.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-container-probe-nvlgb" for this suite. 
+Dec 20 07:37:58.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:37:58.585: INFO: namespace: e2e-tests-container-probe-nvlgb, resource: bindings, ignored listing per whitelist +Dec 20 07:37:58.621: INFO: namespace e2e-tests-container-probe-nvlgb deletion completed in 6.177709455s + +• [SLOW TEST:62.501 seconds] +[k8s.io] Probing container +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected combined + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:37:58.621: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name configmap-projected-all-test-volume-2c275cc6-042a-11e9-b141-0a58ac1c1472 +STEP: Creating 
secret with name secret-projected-all-test-volume-2c275ca3-042a-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test Check all projections for projected volume plugin +Dec 20 07:37:58.765: INFO: Waiting up to 5m0s for pod "projected-volume-2c275c4f-042a-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-cpzlj" to be "success or failure" +Dec 20 07:37:58.771: INFO: Pod "projected-volume-2c275c4f-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.38765ms +Dec 20 07:38:00.779: INFO: Pod "projected-volume-2c275c4f-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013899753s +Dec 20 07:38:02.791: INFO: Pod "projected-volume-2c275c4f-042a-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026095818s +STEP: Saw pod success +Dec 20 07:38:02.791: INFO: Pod "projected-volume-2c275c4f-042a-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:38:02.796: INFO: Trying to get logs from node 10-6-155-34 pod projected-volume-2c275c4f-042a-11e9-b141-0a58ac1c1472 container projected-all-volume-test: +STEP: delete the pod +Dec 20 07:38:02.835: INFO: Waiting for pod projected-volume-2c275c4f-042a-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:38:02.842: INFO: Pod projected-volume-2c275c4f-042a-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:38:02.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-cpzlj" for this suite. 
+Dec 20 07:38:08.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:38:09.009: INFO: namespace: e2e-tests-projected-cpzlj, resource: bindings, ignored listing per whitelist +Dec 20 07:38:09.065: INFO: namespace e2e-tests-projected-cpzlj deletion completed in 6.184663551s + +• [SLOW TEST:10.444 seconds] +[sig-storage] Projected combined +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:38:09.065: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating projection with secret that has name projected-secret-test-map-3264d781-042a-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume secrets 
+Dec 20 07:38:09.230: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3265f85a-042a-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-f4mhj" to be "success or failure" +Dec 20 07:38:09.235: INFO: Pod "pod-projected-secrets-3265f85a-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.732539ms +Dec 20 07:38:11.241: INFO: Pod "pod-projected-secrets-3265f85a-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01075226s +Dec 20 07:38:13.247: INFO: Pod "pod-projected-secrets-3265f85a-042a-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017012433s +STEP: Saw pod success +Dec 20 07:38:13.247: INFO: Pod "pod-projected-secrets-3265f85a-042a-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:38:13.251: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-secrets-3265f85a-042a-11e9-b141-0a58ac1c1472 container projected-secret-volume-test: +STEP: delete the pod +Dec 20 07:38:13.275: INFO: Waiting for pod pod-projected-secrets-3265f85a-042a-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:38:13.279: INFO: Pod pod-projected-secrets-3265f85a-042a-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:38:13.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-f4mhj" for this suite. 
+Dec 20 07:38:19.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:38:19.320: INFO: namespace: e2e-tests-projected-f4mhj, resource: bindings, ignored listing per whitelist +Dec 20 07:38:19.442: INFO: namespace e2e-tests-projected-f4mhj deletion completed in 6.158137419s + +• [SLOW TEST:10.377 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 + should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[k8s.io] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:38:19.442: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 07:38:19.600: INFO: >>> 
kubeConfig: /tmp/kubeconfig-647384748 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:38:23.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-pods-qth88" for this suite. +Dec 20 07:39:13.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:39:13.865: INFO: namespace: e2e-tests-pods-qth88, resource: bindings, ignored listing per whitelist +Dec 20 07:39:13.937: INFO: namespace e2e-tests-pods-qth88 deletion completed in 50.147490955s + +• [SLOW TEST:54.495 seconds] +[k8s.io] Pods +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:39:13.937: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support 
(root,0644,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0644 on node default medium +Dec 20 07:39:14.053: INFO: Waiting up to 5m0s for pod "pod-59094bb7-042a-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-68s8s" to be "success or failure" +Dec 20 07:39:14.057: INFO: Pod "pod-59094bb7-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236919ms +Dec 20 07:39:16.062: INFO: Pod "pod-59094bb7-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00885623s +Dec 20 07:39:18.069: INFO: Pod "pod-59094bb7-042a-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015491464s +STEP: Saw pod success +Dec 20 07:39:18.069: INFO: Pod "pod-59094bb7-042a-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:39:18.072: INFO: Trying to get logs from node 10-6-155-34 pod pod-59094bb7-042a-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 07:39:18.104: INFO: Waiting for pod pod-59094bb7-042a-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:39:18.111: INFO: Pod pod-59094bb7-042a-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:39:18.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-68s8s" for this suite. 
+Dec 20 07:39:24.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:39:24.184: INFO: namespace: e2e-tests-emptydir-68s8s, resource: bindings, ignored listing per whitelist +Dec 20 07:39:24.266: INFO: namespace e2e-tests-emptydir-68s8s deletion completed in 6.14900786s + +• [SLOW TEST:10.329 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (root,0644,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:39:24.266: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Dec 20 07:39:24.386: INFO: Waiting up to 5m0s for pod "pod-5f323a74-042a-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-mr2mr" to be "success or failure" +Dec 20 07:39:24.398: INFO: Pod 
"pod-5f323a74-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 12.099484ms +Dec 20 07:39:26.409: INFO: Pod "pod-5f323a74-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022673851s +Dec 20 07:39:28.413: INFO: Pod "pod-5f323a74-042a-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027374966s +STEP: Saw pod success +Dec 20 07:39:28.413: INFO: Pod "pod-5f323a74-042a-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:39:28.420: INFO: Trying to get logs from node 10-6-155-34 pod pod-5f323a74-042a-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 07:39:28.439: INFO: Waiting for pod pod-5f323a74-042a-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:39:28.442: INFO: Pod pod-5f323a74-042a-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:39:28.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-mr2mr" for this suite. 
+Dec 20 07:39:34.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:39:34.522: INFO: namespace: e2e-tests-emptydir-mr2mr, resource: bindings, ignored listing per whitelist +Dec 20 07:39:34.591: INFO: namespace e2e-tests-emptydir-mr2mr deletion completed in 6.138331987s + +• [SLOW TEST:10.325 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (root,0644,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-api-machinery] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:39:34.591: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating secret e2e-tests-secrets-jdr52/secret-test-655ee9b9-042a-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume secrets +Dec 20 07:39:34.749: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-655f9e39-042a-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-secrets-jdr52" to be "success or failure" +Dec 20 07:39:34.756: INFO: Pod "pod-configmaps-655f9e39-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268737ms +Dec 20 07:39:36.763: INFO: Pod "pod-configmaps-655f9e39-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013286361s +Dec 20 07:39:38.768: INFO: Pod "pod-configmaps-655f9e39-042a-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01869484s +STEP: Saw pod success +Dec 20 07:39:38.768: INFO: Pod "pod-configmaps-655f9e39-042a-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:39:38.771: INFO: Trying to get logs from node 10-6-155-34 pod pod-configmaps-655f9e39-042a-11e9-b141-0a58ac1c1472 container env-test: +STEP: delete the pod +Dec 20 07:39:38.794: INFO: Waiting for pod pod-configmaps-655f9e39-042a-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:39:38.801: INFO: Pod pod-configmaps-655f9e39-042a-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-api-machinery] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:39:38.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-secrets-jdr52" for this suite. 
+Dec 20 07:39:44.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:39:44.934: INFO: namespace: e2e-tests-secrets-jdr52, resource: bindings, ignored listing per whitelist +Dec 20 07:39:44.947: INFO: namespace e2e-tests-secrets-jdr52 deletion completed in 6.137837935s + +• [SLOW TEST:10.355 seconds] +[sig-api-machinery] Secrets +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:39:44.947: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward api env vars +Dec 20 07:39:45.057: INFO: Waiting up to 5m0s for pod "downward-api-6b83bbde-042a-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-dj78v" to be "success or failure" +Dec 20 07:39:45.061: 
INFO: Pod "downward-api-6b83bbde-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274641ms +Dec 20 07:39:47.066: INFO: Pod "downward-api-6b83bbde-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009026179s +Dec 20 07:39:49.071: INFO: Pod "downward-api-6b83bbde-042a-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014802044s +STEP: Saw pod success +Dec 20 07:39:49.071: INFO: Pod "downward-api-6b83bbde-042a-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:39:49.077: INFO: Trying to get logs from node 10-6-155-34 pod downward-api-6b83bbde-042a-11e9-b141-0a58ac1c1472 container dapi-container: +STEP: delete the pod +Dec 20 07:39:49.107: INFO: Waiting for pod downward-api-6b83bbde-042a-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:39:49.112: INFO: Pod downward-api-6b83bbde-042a-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:39:49.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-dj78v" for this suite. 
+Dec 20 07:39:55.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:39:55.173: INFO: namespace: e2e-tests-downward-api-dj78v, resource: bindings, ignored listing per whitelist +Dec 20 07:39:55.253: INFO: namespace e2e-tests-downward-api-dj78v deletion completed in 6.13219639s + +• [SLOW TEST:10.306 seconds] +[sig-node] Downward API +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:39:55.253: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name configmap-test-volume-71ac9fb4-042a-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 07:39:55.399: INFO: Waiting up to 5m0s for pod "pod-configmaps-71ae2311-042a-11e9-b141-0a58ac1c1472" in namespace 
"e2e-tests-configmap-f2bc2" to be "success or failure" +Dec 20 07:39:55.415: INFO: Pod "pod-configmaps-71ae2311-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 15.803699ms +Dec 20 07:39:57.420: INFO: Pod "pod-configmaps-71ae2311-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02074458s +Dec 20 07:39:59.427: INFO: Pod "pod-configmaps-71ae2311-042a-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028418784s +STEP: Saw pod success +Dec 20 07:39:59.427: INFO: Pod "pod-configmaps-71ae2311-042a-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:39:59.433: INFO: Trying to get logs from node 10-6-155-34 pod pod-configmaps-71ae2311-042a-11e9-b141-0a58ac1c1472 container configmap-volume-test: +STEP: delete the pod +Dec 20 07:39:59.457: INFO: Waiting for pod pod-configmaps-71ae2311-042a-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:39:59.461: INFO: Pod pod-configmaps-71ae2311-042a-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:39:59.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-f2bc2" for this suite. 
+Dec 20 07:40:05.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:40:05.595: INFO: namespace: e2e-tests-configmap-f2bc2, resource: bindings, ignored listing per whitelist +Dec 20 07:40:05.658: INFO: namespace e2e-tests-configmap-f2bc2 deletion completed in 6.18551872s + +• [SLOW TEST:10.405 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:40:05.658: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-bsbdk +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Dec 20 07:40:05.805: INFO: Waiting 
up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Dec 20 07:40:29.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.28.20.96:8080/dial?request=hostName&protocol=http&host=172.28.240.99&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-bsbdk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 07:40:29.906: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 07:40:30.384: INFO: Waiting for endpoints: map[] +Dec 20 07:40:30.388: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.28.20.96:8080/dial?request=hostName&protocol=http&host=172.28.20.90&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-bsbdk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 07:40:30.388: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 07:40:30.487: INFO: Waiting for endpoints: map[] +[AfterEach] [sig-network] Networking + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:40:30.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-pod-network-test-bsbdk" for this suite. 
+Dec 20 07:40:52.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:40:52.588: INFO: namespace: e2e-tests-pod-network-test-bsbdk, resource: bindings, ignored listing per whitelist +Dec 20 07:40:52.651: INFO: namespace e2e-tests-pod-network-test-bsbdk deletion completed in 22.157820815s + +• [SLOW TEST:46.994 seconds] +[sig-network] Networking +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:40:52.652: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: 
Creating configMap with name projected-configmap-test-volume-93e455cd-042a-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 07:40:52.806: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93e56504-042a-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-98jr9" to be "success or failure" +Dec 20 07:40:52.817: INFO: Pod "pod-projected-configmaps-93e56504-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 10.703009ms +Dec 20 07:40:54.822: INFO: Pod "pod-projected-configmaps-93e56504-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015682257s +Dec 20 07:40:56.826: INFO: Pod "pod-projected-configmaps-93e56504-042a-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019623673s +STEP: Saw pod success +Dec 20 07:40:56.826: INFO: Pod "pod-projected-configmaps-93e56504-042a-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:40:56.830: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-configmaps-93e56504-042a-11e9-b141-0a58ac1c1472 container projected-configmap-volume-test: +STEP: delete the pod +Dec 20 07:40:56.853: INFO: Waiting for pod pod-projected-configmaps-93e56504-042a-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:40:56.857: INFO: Pod pod-projected-configmaps-93e56504-042a-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:40:56.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-98jr9" for this suite. 
+Dec 20 07:41:02.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:41:02.942: INFO: namespace: e2e-tests-projected-98jr9, resource: bindings, ignored listing per whitelist +Dec 20 07:41:03.016: INFO: namespace e2e-tests-projected-98jr9 deletion completed in 6.151760875s + +• [SLOW TEST:10.365 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:41:03.017: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating pod 
pod-subpath-test-secret-splj +STEP: Creating a pod to test atomic-volume-subpath +Dec 20 07:41:03.145: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-splj" in namespace "e2e-tests-subpath-k7pzk" to be "success or failure" +Dec 20 07:41:03.149: INFO: Pod "pod-subpath-test-secret-splj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255719ms +Dec 20 07:41:05.157: INFO: Pod "pod-subpath-test-secret-splj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012161286s +Dec 20 07:41:15.182: INFO: Pod "pod-subpath-test-secret-splj": Phase="Running", Reason="", readiness=false. Elapsed: 12.036536486s +Dec 20 07:41:17.186: INFO: Pod "pod-subpath-test-secret-splj": Phase="Running", Reason="", readiness=false. Elapsed: 14.040782948s +Dec 20 07:41:19.190: INFO: Pod "pod-subpath-test-secret-splj": Phase="Running", Reason="", readiness=false. Elapsed: 16.04450097s +Dec 20 07:41:29.219: INFO: Pod "pod-subpath-test-secret-splj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.074170825s +STEP: Saw pod success +Dec 20 07:41:29.219: INFO: Pod "pod-subpath-test-secret-splj" satisfied condition "success or failure" +Dec 20 07:41:29.224: INFO: Trying to get logs from node 10-6-155-34 pod pod-subpath-test-secret-splj container test-container-subpath-secret-splj: +STEP: delete the pod +Dec 20 07:41:29.249: INFO: Waiting for pod pod-subpath-test-secret-splj to disappear +Dec 20 07:41:29.253: INFO: Pod pod-subpath-test-secret-splj no longer exists +STEP: Deleting pod pod-subpath-test-secret-splj +Dec 20 07:41:29.253: INFO: Deleting pod "pod-subpath-test-secret-splj" in namespace "e2e-tests-subpath-k7pzk" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:41:29.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-subpath-k7pzk" for this suite. 
+Dec 20 07:41:35.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:41:35.353: INFO: namespace: e2e-tests-subpath-k7pzk, resource: bindings, ignored listing per whitelist +Dec 20 07:41:35.398: INFO: namespace e2e-tests-subpath-k7pzk deletion completed in 6.133572695s + +• [SLOW TEST:32.381 seconds] +[sig-storage] Subpath +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with secret pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:41:35.398: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Dec 20 07:41:35.490: INFO: Waiting up to 1m0s for all (but 0) nodes to 
be ready +Dec 20 07:41:35.502: INFO: Waiting for terminating namespaces to be deleted... +Dec 20 07:41:35.506: INFO: +Logging pods the kubelet thinks is on node 10-6-155-33 before test +Dec 20 07:41:35.527: INFO: coredns-87987d698-55xbs from kube-system started at 2018-12-13 03:08:41 +0000 UTC (1 container statuses recorded) +Dec 20 07:41:35.528: INFO: Container coredns ready: true, restart count 1 +Dec 20 07:41:35.528: INFO: coredns-87987d698-4brj5 from kube-system started at 2018-12-17 03:35:16 +0000 UTC (1 container statuses recorded) +Dec 20 07:41:35.528: INFO: Container coredns ready: true, restart count 0 +Dec 20 07:41:35.528: INFO: calico-kube-controllers-5dd6c6f8bc-4xfk4 from kube-system started at 2018-12-17 03:35:16 +0000 UTC (1 container statuses recorded) +Dec 20 07:41:35.528: INFO: Container calico-kube-controllers ready: true, restart count 0 +Dec 20 07:41:35.528: INFO: wordpress-wordpress-97f5cbb67-6j958 from default started at 2018-12-17 03:35:16 +0000 UTC (1 container statuses recorded) +Dec 20 07:41:35.528: INFO: Container wordpress-wordpress ready: true, restart count 0 +Dec 20 07:41:35.528: INFO: calico-node-lbxlp from kube-system started at 2018-12-20 07:15:25 +0000 UTC (2 container statuses recorded) +Dec 20 07:41:35.528: INFO: Container calico-node ready: true, restart count 0 +Dec 20 07:41:35.528: INFO: Container install-cni ready: true, restart count 0 +Dec 20 07:41:35.528: INFO: wordpress-wordpress-mysql-75d5f8f644-tbzfh from default started at 2018-12-13 03:19:52 +0000 UTC (1 container statuses recorded) +Dec 20 07:41:35.528: INFO: Container wordpress-mysql ready: true, restart count 1 +Dec 20 07:41:35.528: INFO: kube-proxy-84x26 from kube-system started at 2018-12-20 07:15:33 +0000 UTC (1 container statuses recorded) +Dec 20 07:41:35.528: INFO: Container kube-proxy ready: true, restart count 0 +Dec 20 07:41:35.528: INFO: d2048-2048-7b95b48c9b-n6hqw from default started at 2018-12-20 07:19:05 +0000 UTC (1 container statuses recorded) +Dec 
20 07:41:35.528: INFO: Container d2048-2048 ready: true, restart count 0 +Dec 20 07:41:35.528: INFO: smokeping-sb4jz from kube-system started at 2018-12-13 03:01:41 +0000 UTC (1 container statuses recorded) +Dec 20 07:41:35.528: INFO: Container smokeping ready: true, restart count 5 +Dec 20 07:41:35.528: INFO: +Logging pods the kubelet thinks is on node 10-6-155-34 before test +Dec 20 07:41:35.540: INFO: kube-proxy-m94wf from kube-system started at 2018-12-20 07:15:39 +0000 UTC (1 container statuses recorded) +Dec 20 07:41:35.540: INFO: Container kube-proxy ready: true, restart count 0 +Dec 20 07:41:35.540: INFO: sonobuoy-e2e-job-b25697b233924eae from heptio-sonobuoy started at 2018-12-20 07:21:27 +0000 UTC (2 container statuses recorded) +Dec 20 07:41:35.540: INFO: Container e2e ready: true, restart count 0 +Dec 20 07:41:35.540: INFO: Container sonobuoy-worker ready: true, restart count 0 +Dec 20 07:41:35.540: INFO: calico-node-mz7bv from kube-system started at 2018-12-20 07:15:25 +0000 UTC (2 container statuses recorded) +Dec 20 07:41:35.540: INFO: Container calico-node ready: true, restart count 0 +Dec 20 07:41:35.540: INFO: Container install-cni ready: true, restart count 0 +Dec 20 07:41:35.540: INFO: sonobuoy from heptio-sonobuoy started at 2018-12-20 07:21:15 +0000 UTC (3 container statuses recorded) +Dec 20 07:41:35.540: INFO: Container cleanup ready: true, restart count 0 +Dec 20 07:41:35.540: INFO: Container forwarder ready: true, restart count 0 +Dec 20 07:41:35.540: INFO: Container kube-sonobuoy ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Trying to schedule Pod with nonempty NodeSelector. 
+STEP: Considering event: +Type = [Warning], Name = [restricted-pod.1571fa9c1db9541e], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:41:36.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-sched-pred-s72th" for this suite. +Dec 20 07:41:42.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:41:42.696: INFO: namespace: e2e-tests-sched-pred-s72th, resource: bindings, ignored listing per whitelist +Dec 20 07:41:42.734: INFO: namespace e2e-tests-sched-pred-s72th deletion completed in 6.130287829s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:7.335 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates that NodeSelector is respected if not matching [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Update Demo + should create and stop a replication controller [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:41:42.734: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Update Demo + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 +[It] should create and stop a replication controller [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating a replication controller +Dec 20 07:41:42.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-5wv99' +Dec 20 07:41:43.086: INFO: stderr: "" +Dec 20 07:41:43.086: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Dec 20 07:41:43.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5wv99' +Dec 20 07:41:43.236: INFO: stderr: "" +Dec 20 07:41:43.236: INFO: stdout: "update-demo-nautilus-7ppgz update-demo-nautilus-tv6jh " +Dec 20 07:41:43.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-7ppgz -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5wv99' +Dec 20 07:41:43.373: INFO: stderr: "" +Dec 20 07:41:43.373: INFO: stdout: "" +Dec 20 07:41:43.373: INFO: update-demo-nautilus-7ppgz is created but not running +Dec 20 07:41:48.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5wv99' +Dec 20 07:41:48.576: INFO: stderr: "" +Dec 20 07:41:48.576: INFO: stdout: "update-demo-nautilus-7ppgz update-demo-nautilus-tv6jh " +Dec 20 07:41:48.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-7ppgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5wv99' +Dec 20 07:41:48.719: INFO: stderr: "" +Dec 20 07:41:48.719: INFO: stdout: "true" +Dec 20 07:41:48.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-7ppgz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5wv99' +Dec 20 07:41:48.908: INFO: stderr: "" +Dec 20 07:41:48.909: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Dec 20 07:41:48.909: INFO: validating pod update-demo-nautilus-7ppgz +Dec 20 07:41:48.924: INFO: got data: { + "image": "nautilus.jpg" +} + +Dec 20 07:41:48.924: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Dec 20 07:41:48.924: INFO: update-demo-nautilus-7ppgz is verified up and running +Dec 20 07:41:48.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-tv6jh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5wv99' +Dec 20 07:41:49.086: INFO: stderr: "" +Dec 20 07:41:49.086: INFO: stdout: "true" +Dec 20 07:41:49.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-tv6jh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5wv99' +Dec 20 07:41:49.243: INFO: stderr: "" +Dec 20 07:41:49.243: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Dec 20 07:41:49.243: INFO: validating pod update-demo-nautilus-tv6jh +Dec 20 07:41:49.254: INFO: got data: { + "image": "nautilus.jpg" +} + +Dec 20 07:41:49.254: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Dec 20 07:41:49.254: INFO: update-demo-nautilus-tv6jh is verified up and running +STEP: using delete to clean up resources +Dec 20 07:41:49.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-5wv99' +Dec 20 07:41:49.427: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Dec 20 07:41:49.427: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Dec 20 07:41:49.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-5wv99' +Dec 20 07:41:49.628: INFO: stderr: "No resources found.\n" +Dec 20 07:41:49.628: INFO: stdout: "" +Dec 20 07:41:49.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -l name=update-demo --namespace=e2e-tests-kubectl-5wv99 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Dec 20 07:41:49.839: INFO: stderr: "" +Dec 20 07:41:49.839: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:41:49.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-5wv99" for this suite. 
+Dec 20 07:42:11.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:42:11.926: INFO: namespace: e2e-tests-kubectl-5wv99, resource: bindings, ignored listing per whitelist +Dec 20 07:42:11.974: INFO: namespace e2e-tests-kubectl-5wv99 deletion completed in 22.127844989s + +• [SLOW TEST:29.240 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Update Demo + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should create and stop a replication controller [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:42:11.974: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should set mode on item file [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Dec 20 07:42:12.073: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3259108-042a-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-r4c4c" to be "success or failure" +Dec 20 07:42:16.100: INFO: Pod "downwardapi-volume-c3259108-042a-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026871981s +STEP: Saw pod success +Dec 20 07:42:16.100: INFO: Pod "downwardapi-volume-c3259108-042a-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:42:16.104: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-c3259108-042a-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 07:42:16.133: INFO: Waiting for pod downwardapi-volume-c3259108-042a-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:42:16.136: INFO: Pod downwardapi-volume-c3259108-042a-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:42:16.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-r4c4c" for this suite. 
+Dec 20 07:42:22.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:42:22.206: INFO: namespace: e2e-tests-projected-r4c4c, resource: bindings, ignored listing per whitelist +Dec 20 07:42:22.276: INFO: namespace e2e-tests-projected-r4c4c deletion completed in 6.130564011s + +• [SLOW TEST:10.302 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should set mode on item file [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:42:22.277: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name projected-configmap-test-volume-map-c94dddcb-042a-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 
07:42:22.415: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c94e7e06-042a-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-m8x5m" to be "success or failure" +Dec 20 07:42:22.420: INFO: Pod "pod-projected-configmaps-c94e7e06-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.611478ms +Dec 20 07:42:24.426: INFO: Pod "pod-projected-configmaps-c94e7e06-042a-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011688069s +Dec 20 07:42:26.433: INFO: Pod "pod-projected-configmaps-c94e7e06-042a-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01814898s +STEP: Saw pod success +Dec 20 07:42:26.433: INFO: Pod "pod-projected-configmaps-c94e7e06-042a-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:42:26.436: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-configmaps-c94e7e06-042a-11e9-b141-0a58ac1c1472 container projected-configmap-volume-test: +STEP: delete the pod +Dec 20 07:42:26.489: INFO: Waiting for pod pod-projected-configmaps-c94e7e06-042a-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:42:26.496: INFO: Pod pod-projected-configmaps-c94e7e06-042a-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:42:26.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-m8x5m" for this suite. 
+Dec 20 07:42:32.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:42:32.533: INFO: namespace: e2e-tests-projected-m8x5m, resource: bindings, ignored listing per whitelist +Dec 20 07:42:32.638: INFO: namespace e2e-tests-projected-m8x5m deletion completed in 6.134998938s + +• [SLOW TEST:10.361 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:42:32.638: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should write entries to /etc/hosts [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
+[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:42:36.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubelet-test-kjzn5" for this suite. +Dec 20 07:43:22.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:43:22.850: INFO: namespace: e2e-tests-kubelet-test-kjzn5, resource: bindings, ignored listing per whitelist +Dec 20 07:43:22.886: INFO: namespace e2e-tests-kubelet-test-kjzn5 deletion completed in 46.123321072s + +• [SLOW TEST:50.249 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + when scheduling a busybox Pod with hostAliases + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 + should write entries to /etc/hosts [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:43:22.887: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace 
api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +W1220 07:43:53.046341 17 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +Dec 20 07:43:53.046: INFO: For apiserver_request_count: +For apiserver_request_latencies_summary: +For etcd_helper_cache_entry_count: +For etcd_helper_cache_hit_count: +For etcd_helper_cache_miss_count: +For etcd_request_cache_add_latencies_summary: +For etcd_request_cache_get_latencies_summary: +For etcd_request_latencies_summary: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:43:53.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-gc-wftv9" for this suite. +Dec 20 07:43:59.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:43:59.197: INFO: namespace: e2e-tests-gc-wftv9, resource: bindings, ignored listing per whitelist +Dec 20 07:43:59.220: INFO: namespace e2e-tests-gc-wftv9 deletion completed in 6.167751452s + +• [SLOW TEST:36.333 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:43:59.220: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Dec 20 07:43:59.382: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qz7fk,SelfLink:/api/v1/namespaces/e2e-tests-watch-qz7fk/configmaps/e2e-watch-test-label-changed,UID:0318a5c3-042b-11e9-b07b-0242ac120004,ResourceVersion:953579,Generation:0,CreationTimestamp:2018-12-20 07:43:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +Dec 20 07:43:59.382: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qz7fk,SelfLink:/api/v1/namespaces/e2e-tests-watch-qz7fk/configmaps/e2e-watch-test-label-changed,UID:0318a5c3-042b-11e9-b07b-0242ac120004,ResourceVersion:953580,Generation:0,CreationTimestamp:2018-12-20 07:43:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +Dec 20 07:43:59.382: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qz7fk,SelfLink:/api/v1/namespaces/e2e-tests-watch-qz7fk/configmaps/e2e-watch-test-label-changed,UID:0318a5c3-042b-11e9-b07b-0242ac120004,ResourceVersion:953581,Generation:0,CreationTimestamp:2018-12-20 07:43:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Dec 20 07:44:09.410: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qz7fk,SelfLink:/api/v1/namespaces/e2e-tests-watch-qz7fk/configmaps/e2e-watch-test-label-changed,UID:0318a5c3-042b-11e9-b07b-0242ac120004,ResourceVersion:953598,Generation:0,CreationTimestamp:2018-12-20 07:43:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Dec 20 07:44:09.411: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qz7fk,SelfLink:/api/v1/namespaces/e2e-tests-watch-qz7fk/configmaps/e2e-watch-test-label-changed,UID:0318a5c3-042b-11e9-b07b-0242ac120004,ResourceVersion:953599,Generation:0,CreationTimestamp:2018-12-20 07:43:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} +Dec 20 07:44:09.411: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qz7fk,SelfLink:/api/v1/namespaces/e2e-tests-watch-qz7fk/configmaps/e2e-watch-test-label-changed,UID:0318a5c3-042b-11e9-b07b-0242ac120004,ResourceVersion:953600,Generation:0,CreationTimestamp:2018-12-20 07:43:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:44:09.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-watch-qz7fk" for this suite. 
+Dec 20 07:44:15.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:44:15.518: INFO: namespace: e2e-tests-watch-qz7fk, resource: bindings, ignored listing per whitelist +Dec 20 07:44:15.553: INFO: namespace e2e-tests-watch-qz7fk deletion completed in 6.137557922s + +• [SLOW TEST:16.333 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:44:15.553: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Kubectl run pod + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 +[It] should create a pod from an image when restart is Never 
[Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: running the image docker.io/library/nginx:1.14-alpine +Dec 20 07:44:15.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-7nqxm' +Dec 20 07:44:15.795: INFO: stderr: "" +Dec 20 07:44:15.795: INFO: stdout: "pod/e2e-test-nginx-pod created\n" +STEP: verifying the pod e2e-test-nginx-pod was created +[AfterEach] [k8s.io] Kubectl run pod + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 +Dec 20 07:44:15.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7nqxm' +Dec 20 07:44:19.091: INFO: stderr: "" +Dec 20 07:44:19.091: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:44:19.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-7nqxm" for this suite. 
+Dec 20 07:44:25.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:44:25.251: INFO: namespace: e2e-tests-kubectl-7nqxm, resource: bindings, ignored listing per whitelist +Dec 20 07:44:25.253: INFO: namespace e2e-tests-kubectl-7nqxm deletion completed in 6.150569081s + +• [SLOW TEST:9.700 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl run pod + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should create a pod from an image when restart is Never [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:44:25.253: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide podname only [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Dec 20 07:44:25.385: INFO: Waiting up to 5m0s for pod "downwardapi-volume-129a7c1b-042b-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-6rtqd" to be "success or failure" +Dec 20 07:44:25.393: INFO: Pod "downwardapi-volume-129a7c1b-042b-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 8.635845ms +Dec 20 07:44:27.398: INFO: Pod "downwardapi-volume-129a7c1b-042b-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012794906s +Dec 20 07:44:29.403: INFO: Pod "downwardapi-volume-129a7c1b-042b-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018672624s +STEP: Saw pod success +Dec 20 07:44:29.404: INFO: Pod "downwardapi-volume-129a7c1b-042b-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:44:29.407: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-129a7c1b-042b-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 07:44:29.431: INFO: Waiting for pod downwardapi-volume-129a7c1b-042b-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:44:29.439: INFO: Pod downwardapi-volume-129a7c1b-042b-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:44:29.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-6rtqd" for this suite. 
+Dec 20 07:44:35.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:44:35.515: INFO: namespace: e2e-tests-downward-api-6rtqd, resource: bindings, ignored listing per whitelist +Dec 20 07:44:35.558: INFO: namespace e2e-tests-downward-api-6rtqd deletion completed in 6.114226516s + +• [SLOW TEST:10.305 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:44:35.559: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: 
Creating a pod to test downward API volume plugin +Dec 20 07:44:35.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18bbe2d8-042b-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-ns5fw" to be "success or failure" +Dec 20 07:44:35.680: INFO: Pod "downwardapi-volume-18bbe2d8-042b-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 12.001454ms +Dec 20 07:44:37.686: INFO: Pod "downwardapi-volume-18bbe2d8-042b-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018012394s +Dec 20 07:44:39.698: INFO: Pod "downwardapi-volume-18bbe2d8-042b-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029803841s +STEP: Saw pod success +Dec 20 07:44:39.698: INFO: Pod "downwardapi-volume-18bbe2d8-042b-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:44:39.703: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-18bbe2d8-042b-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 07:44:39.778: INFO: Waiting for pod downwardapi-volume-18bbe2d8-042b-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:44:39.782: INFO: Pod downwardapi-volume-18bbe2d8-042b-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:44:39.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-ns5fw" for this suite. 
+Dec 20 07:44:45.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:44:45.908: INFO: namespace: e2e-tests-projected-ns5fw, resource: bindings, ignored listing per whitelist +Dec 20 07:44:45.912: INFO: namespace e2e-tests-projected-ns5fw deletion completed in 6.121317291s + +• [SLOW TEST:10.354 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:44:45.912: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. 
+[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: create the pod with lifecycle hook +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Dec 20 07:44:54.079: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Dec 20 07:44:54.087: INFO: Pod pod-with-poststart-exec-hook still exists +Dec 20 07:44:56.087: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Dec 20 07:44:56.092: INFO: Pod pod-with-poststart-exec-hook still exists +Dec 20 07:44:58.087: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Dec 20 07:45:16.087: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Dec 20 07:45:16.095: INFO: Pod pod-with-poststart-exec-hook still exists +Dec 20 07:45:18.087: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Dec 20 07:45:18.091: INFO: Pod pod-with-poststart-exec-hook still exists +Dec 20 07:45:20.087: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Dec 20 07:45:20.091: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:45:20.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gkprf" for this suite. 
+Dec 20 07:45:42.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:45:42.171: INFO: namespace: e2e-tests-container-lifecycle-hook-gkprf, resource: bindings, ignored listing per whitelist +Dec 20 07:45:42.225: INFO: namespace e2e-tests-container-lifecycle-hook-gkprf deletion completed in 22.127746784s + +• [SLOW TEST:56.312 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + when create a pod with lifecycle hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:45:42.225: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] 
[Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Dec 20 07:45:46.861: INFO: Successfully updated pod "pod-update-activedeadlineseconds-40785b89-042b-11e9-b141-0a58ac1c1472" +Dec 20 07:45:46.861: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-40785b89-042b-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-pods-bjk5x" to be "terminated due to deadline exceeded" +Dec 20 07:45:46.865: INFO: Pod "pod-update-activedeadlineseconds-40785b89-042b-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 3.403283ms +Dec 20 07:45:48.871: INFO: Pod "pod-update-activedeadlineseconds-40785b89-042b-11e9-b141-0a58ac1c1472": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.00970122s +Dec 20 07:45:48.871: INFO: Pod "pod-update-activedeadlineseconds-40785b89-042b-11e9-b141-0a58ac1c1472" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:45:48.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-pods-bjk5x" for this suite. 
+Dec 20 07:45:54.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:45:54.996: INFO: namespace: e2e-tests-pods-bjk5x, resource: bindings, ignored listing per whitelist +Dec 20 07:45:55.029: INFO: namespace e2e-tests-pods-bjk5x deletion completed in 6.146029397s + +• [SLOW TEST:12.804 seconds] +[k8s.io] Pods +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:45:55.029: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Kubectl replace + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 +[It] should update a single-container pod's image [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: running the image docker.io/library/nginx:1.14-alpine +Dec 20 07:45:55.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-46524' +Dec 20 07:45:55.553: INFO: stderr: "" +Dec 20 07:45:55.553: INFO: stdout: "pod/e2e-test-nginx-pod created\n" +STEP: verifying the pod e2e-test-nginx-pod is running +STEP: verifying the pod e2e-test-nginx-pod was created +Dec 20 07:46:00.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-46524 -o json' +Dec 20 07:46:00.777: INFO: stderr: "" +Dec 20 07:46:00.777: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2018-12-20T07:45:55Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-46524\",\n \"resourceVersion\": \"953949\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-46524/pods/e2e-test-nginx-pod\",\n \"uid\": \"4855c11c-042b-11e9-b07b-0242ac120004\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-sgxtp\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"10-6-155-34\",\n \"restartPolicy\": \"Always\",\n \"schedulerName\": 
\"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-sgxtp\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-sgxtp\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2018-12-20T07:45:55Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2018-12-20T07:45:59Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2018-12-20T07:45:59Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2018-12-20T07:45:55Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://b3b205bad946ae1fe6c384d31e60414beacb78f53d043607f12f1773f50353eb\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:2abeba7cab34eb197ff7363486a2aa590027388eafd8e740efae7aae1bed28b6\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2018-12-20T07:45:59Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.6.155.34\",\n \"phase\": \"Running\",\n \"podIP\": \"172.28.20.87\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2018-12-20T07:45:55Z\"\n }\n}\n" +STEP: replace the image in the pod +Dec 20 07:46:00.777: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-647384748 replace -f - --namespace=e2e-tests-kubectl-46524' +Dec 20 07:46:01.040: INFO: stderr: "" +Dec 20 07:46:01.040: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" +STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 +[AfterEach] [k8s.io] Kubectl replace + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 +Dec 20 07:46:01.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-46524' +Dec 20 07:46:09.079: INFO: stderr: "" +Dec 20 07:46:09.079: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:46:09.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-46524" for this suite. 
+Dec 20 07:46:15.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:46:15.224: INFO: namespace: e2e-tests-kubectl-46524, resource: bindings, ignored listing per whitelist +Dec 20 07:46:15.233: INFO: namespace e2e-tests-kubectl-46524 deletion completed in 6.146246773s + +• [SLOW TEST:20.204 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl replace + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should update a single-container pod's image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:46:15.233: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[BeforeEach] when scheduling a busybox command that always fails in a pod + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:46:15.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubelet-test-4xvl4" for this suite. +Dec 20 07:46:21.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:46:21.406: INFO: namespace: e2e-tests-kubelet-test-4xvl4, resource: bindings, ignored listing per whitelist +Dec 20 07:46:21.494: INFO: namespace e2e-tests-kubelet-test-4xvl4 deletion completed in 6.146924822s + +• [SLOW TEST:6.261 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 + should be possible to delete [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:46:21.497: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 07:46:21.611: INFO: Creating ReplicaSet my-hostname-basic-57e31bbb-042b-11e9-b141-0a58ac1c1472 +Dec 20 07:46:21.620: INFO: Pod name my-hostname-basic-57e31bbb-042b-11e9-b141-0a58ac1c1472: Found 0 pods out of 1 +Dec 20 07:46:26.625: INFO: Pod name my-hostname-basic-57e31bbb-042b-11e9-b141-0a58ac1c1472: Found 1 pods out of 1 +Dec 20 07:46:26.625: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-57e31bbb-042b-11e9-b141-0a58ac1c1472" is running +Dec 20 07:46:26.628: INFO: Pod "my-hostname-basic-57e31bbb-042b-11e9-b141-0a58ac1c1472-xjfdn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-12-20 07:46:21 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-12-20 07:46:24 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-12-20 07:46:24 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-12-20 07:46:21 +0000 UTC Reason: Message:}]) +Dec 20 07:46:26.628: INFO: Trying to dial 
the pod +Dec 20 07:46:31.643: INFO: Controller my-hostname-basic-57e31bbb-042b-11e9-b141-0a58ac1c1472: Got expected result from replica 1 [my-hostname-basic-57e31bbb-042b-11e9-b141-0a58ac1c1472-xjfdn]: "my-hostname-basic-57e31bbb-042b-11e9-b141-0a58ac1c1472-xjfdn", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:46:31.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-replicaset-dxmhp" for this suite. +Dec 20 07:46:37.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:46:37.720: INFO: namespace: e2e-tests-replicaset-dxmhp, resource: bindings, ignored listing per whitelist +Dec 20 07:46:37.773: INFO: namespace e2e-tests-replicaset-dxmhp deletion completed in 6.122136064s + +• [SLOW TEST:16.277 seconds] +[sig-apps] ReplicaSet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:46:37.774: INFO: >>> 
kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should run and stop complex daemon [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 07:46:37.951: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Dec 20 07:46:37.962: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:37.962: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Change node label to blue, check that daemon pod is launched. +Dec 20 07:46:37.980: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:37.980: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:46:38.985: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:38.985: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:46:39.986: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:39.986: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:46:40.985: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:40.985: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:46:41.985: INFO: Number of nodes with available pods: 1 +Dec 20 07:46:41.985: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update the node label to green, and wait for daemons to be unscheduled +Dec 20 07:46:42.005: INFO: Number of nodes with available pods: 1 +Dec 20 07:46:42.005: INFO: Number of running nodes: 0, number of available pods: 1 +Dec 20 07:46:43.010: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:43.010: 
INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Dec 20 07:46:43.026: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:43.026: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:46:44.031: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:44.031: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:46:45.031: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:56.032: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:46:57.035: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:57.035: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:46:58.031: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:58.031: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:46:59.032: INFO: Number of nodes with available pods: 0 +Dec 20 07:46:59.032: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:47:00.031: INFO: Number of nodes with available pods: 0 +Dec 20 07:47:14.034: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:47:15.031: INFO: Number of nodes with available pods: 0 +Dec 20 07:47:15.031: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:47:16.031: INFO: Number of nodes with available pods: 0 +Dec 20 07:47:16.031: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:47:17.032: INFO: Number of nodes with available pods: 0 +Dec 20 07:47:17.032: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:47:18.031: INFO: Number of nodes with available pods: 0 +Dec 20 07:47:22.034: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:47:23.038: INFO: Number of nodes with available pods: 0 +Dec 20 07:47:23.038: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:47:24.032: INFO: Number of nodes with available pods: 
0 +Dec 20 07:47:24.032: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:47:25.030: INFO: Number of nodes with available pods: 0 +Dec 20 07:47:25.030: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 07:47:26.034: INFO: Number of nodes with available pods: 1 +Dec 20 07:47:26.034: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-dm8bj, will wait for the garbage collector to delete the pods +Dec 20 07:47:26.102: INFO: Deleting DaemonSet.extensions daemon-set took: 7.353911ms +Dec 20 07:47:26.202: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.290982ms +Dec 20 07:48:03.209: INFO: Number of nodes with available pods: 0 +Dec 20 07:48:03.209: INFO: Number of running nodes: 0, number of available pods: 0 +Dec 20 07:48:03.223: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dm8bj/daemonsets","resourceVersion":"954265"},"items":null} + +Dec 20 07:48:03.233: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dm8bj/pods","resourceVersion":"954265"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:48:03.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-daemonsets-dm8bj" for this suite. 
+Dec 20 07:48:09.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:48:09.479: INFO: namespace: e2e-tests-daemonsets-dm8bj, resource: bindings, ignored listing per whitelist +Dec 20 07:48:09.492: INFO: namespace e2e-tests-daemonsets-dm8bj deletion completed in 6.199075802s + +• [SLOW TEST:91.719 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should run and stop complex daemon [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run deployment + should create a deployment from an image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:48:09.493: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Kubectl run deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 +[It] should create a deployment from an image [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: running the image docker.io/library/nginx:1.14-alpine +Dec 20 07:48:09.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-2pc2c' +Dec 20 07:48:09.833: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Dec 20 07:48:09.834: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" +STEP: verifying the deployment e2e-test-nginx-deployment was created +STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created +[AfterEach] [k8s.io] Kubectl run deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 +Dec 20 07:48:11.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-2pc2c' +Dec 20 07:48:12.040: INFO: stderr: "" +Dec 20 07:48:12.040: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:48:12.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-2pc2c" for this suite. 
+Dec 20 07:48:34.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:48:34.129: INFO: namespace: e2e-tests-kubectl-2pc2c, resource: bindings, ignored listing per whitelist +Dec 20 07:48:34.202: INFO: namespace e2e-tests-kubectl-2pc2c deletion completed in 22.145490327s + +• [SLOW TEST:24.710 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl run deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should create a deployment from an image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl expose + should create services for rc [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:48:34.202: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[It] should create services for rc [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating Redis RC +Dec 20 07:48:34.344: INFO: namespace e2e-tests-kubectl-2h8m8 +Dec 20 07:48:34.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-2h8m8' +Dec 20 07:48:34.611: INFO: stderr: "" +Dec 20 07:48:34.611: INFO: stdout: "replicationcontroller/redis-master created\n" +STEP: Waiting for Redis master to start. +Dec 20 07:48:35.617: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:48:35.617: INFO: Found 0 / 1 +Dec 20 07:48:36.616: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:48:36.616: INFO: Found 0 / 1 +Dec 20 07:48:38.693: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:48:38.694: INFO: Found 1 / 1 +Dec 20 07:48:38.694: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Dec 20 07:48:38.700: INFO: Selector matched 1 pods for map[app:redis] +Dec 20 07:48:38.700: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Dec 20 07:48:38.700: INFO: wait on redis-master startup in e2e-tests-kubectl-2h8m8 +Dec 20 07:48:38.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 logs redis-master-xvr9c redis-master --namespace=e2e-tests-kubectl-2h8m8' +Dec 20 07:48:38.856: INFO: stderr: "" +Dec 20 07:48:38.856: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Dec 07:48:38.227 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Dec 07:48:38.227 # Server started, Redis version 3.2.12\n1:M 20 Dec 07:48:38.227 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Dec 07:48:38.227 * The server is now ready to accept connections on port 6379\n" +STEP: exposing RC +Dec 20 07:48:38.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-2h8m8' +Dec 20 07:48:39.052: INFO: stderr: "" +Dec 20 07:48:39.052: INFO: stdout: "service/rm2 exposed\n" +Dec 20 07:48:39.057: INFO: Service rm2 in namespace e2e-tests-kubectl-2h8m8 found. +STEP: exposing service +Dec 20 07:48:41.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-2h8m8' +Dec 20 07:48:41.225: INFO: stderr: "" +Dec 20 07:48:41.225: INFO: stdout: "service/rm3 exposed\n" +Dec 20 07:48:41.229: INFO: Service rm3 in namespace e2e-tests-kubectl-2h8m8 found. 
+[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:48:43.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-2h8m8" for this suite. +Dec 20 07:49:05.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:49:05.291: INFO: namespace: e2e-tests-kubectl-2h8m8, resource: bindings, ignored listing per whitelist +Dec 20 07:49:05.403: INFO: namespace e2e-tests-kubectl-2h8m8 deletion completed in 22.159348166s + +• [SLOW TEST:31.200 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl expose + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should create services for rc [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:49:05.403: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be 
provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Dec 20 07:49:05.558: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b99a0262-042b-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-fvbrk" to be "success or failure" +Dec 20 07:49:05.561: INFO: Pod "downwardapi-volume-b99a0262-042b-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.675398ms +Dec 20 07:49:07.566: INFO: Pod "downwardapi-volume-b99a0262-042b-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007482359s +Dec 20 07:49:09.570: INFO: Pod "downwardapi-volume-b99a0262-042b-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.011847792s +Dec 20 07:49:11.577: INFO: Pod "downwardapi-volume-b99a0262-042b-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018491311s +STEP: Saw pod success +Dec 20 07:49:11.577: INFO: Pod "downwardapi-volume-b99a0262-042b-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:49:11.581: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-b99a0262-042b-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 07:49:11.606: INFO: Waiting for pod downwardapi-volume-b99a0262-042b-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:49:11.611: INFO: Pod downwardapi-volume-b99a0262-042b-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:49:11.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-fvbrk" for this suite. +Dec 20 07:49:17.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:49:17.768: INFO: namespace: e2e-tests-downward-api-fvbrk, resource: bindings, ignored listing per whitelist +Dec 20 07:49:17.802: INFO: namespace e2e-tests-downward-api-fvbrk deletion completed in 6.18103469s + +• [SLOW TEST:12.399 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:49:17.802: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating projection with secret that has name projected-secret-test-map-c0f9736b-042b-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume secrets +Dec 20 07:49:17.931: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c0fa0a82-042b-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-mj9lj" to be "success or failure" +Dec 20 07:49:17.943: INFO: Pod "pod-projected-secrets-c0fa0a82-042b-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 12.895383ms +Dec 20 07:49:19.949: INFO: Pod "pod-projected-secrets-c0fa0a82-042b-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018406753s +Dec 20 07:49:21.954: INFO: Pod "pod-projected-secrets-c0fa0a82-042b-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023230976s +STEP: Saw pod success +Dec 20 07:49:21.954: INFO: Pod "pod-projected-secrets-c0fa0a82-042b-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:49:21.961: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-secrets-c0fa0a82-042b-11e9-b141-0a58ac1c1472 container projected-secret-volume-test: +STEP: delete the pod +Dec 20 07:49:21.986: INFO: Waiting for pod pod-projected-secrets-c0fa0a82-042b-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:49:21.990: INFO: Pod pod-projected-secrets-c0fa0a82-042b-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:49:21.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-mj9lj" for this suite. +Dec 20 07:49:28.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:49:28.056: INFO: namespace: e2e-tests-projected-mj9lj, resource: bindings, ignored listing per whitelist +Dec 20 07:49:28.150: INFO: namespace e2e-tests-projected-mj9lj deletion completed in 6.147215241s + +• [SLOW TEST:10.348 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:49:28.150: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Dec 20 07:49:28.551: INFO: Pod name wrapped-volume-race-c74c9aff-042b-11e9-b141-0a58ac1c1472: Found 0 pods out of 5 +Dec 20 07:49:33.560: INFO: Pod name wrapped-volume-race-c74c9aff-042b-11e9-b141-0a58ac1c1472: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-c74c9aff-042b-11e9-b141-0a58ac1c1472 in namespace e2e-tests-emptydir-wrapper-9qqvl, will wait for the garbage collector to delete the pods +Dec 20 07:49:51.693: INFO: Deleting ReplicationController wrapped-volume-race-c74c9aff-042b-11e9-b141-0a58ac1c1472 took: 14.675294ms +Dec 20 07:49:51.794: INFO: Terminating ReplicationController wrapped-volume-race-c74c9aff-042b-11e9-b141-0a58ac1c1472 pods took: 100.277122ms +STEP: Creating RC which spawns configmap-volume pods +Dec 20 07:50:33.617: INFO: Pod name wrapped-volume-race-ee14a33c-042b-11e9-b141-0a58ac1c1472: Found 0 pods out of 5 +Dec 20 07:50:38.636: INFO: Pod name wrapped-volume-race-ee14a33c-042b-11e9-b141-0a58ac1c1472: Found 5 pods out of 5 +STEP: Ensuring each pod is running 
+STEP: deleting ReplicationController wrapped-volume-race-ee14a33c-042b-11e9-b141-0a58ac1c1472 in namespace e2e-tests-emptydir-wrapper-9qqvl, will wait for the garbage collector to delete the pods +Dec 20 07:50:56.854: INFO: Deleting ReplicationController wrapped-volume-race-ee14a33c-042b-11e9-b141-0a58ac1c1472 took: 46.495815ms +Dec 20 07:50:56.955: INFO: Terminating ReplicationController wrapped-volume-race-ee14a33c-042b-11e9-b141-0a58ac1c1472 pods took: 100.212239ms +STEP: Creating RC which spawns configmap-volume pods +Dec 20 07:51:43.382: INFO: Pod name wrapped-volume-race-17a94be5-042c-11e9-b141-0a58ac1c1472: Found 0 pods out of 5 +Dec 20 07:51:48.394: INFO: Pod name wrapped-volume-race-17a94be5-042c-11e9-b141-0a58ac1c1472: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-17a94be5-042c-11e9-b141-0a58ac1c1472 in namespace e2e-tests-emptydir-wrapper-9qqvl, will wait for the garbage collector to delete the pods +Dec 20 07:52:04.536: INFO: Deleting ReplicationController wrapped-volume-race-17a94be5-042c-11e9-b141-0a58ac1c1472 took: 28.702539ms +Dec 20 07:52:04.636: INFO: Terminating ReplicationController wrapped-volume-race-17a94be5-042c-11e9-b141-0a58ac1c1472 pods took: 100.215037ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:52:44.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-wrapper-9qqvl" for this suite. 
+Dec 20 07:52:52.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:52:52.308: INFO: namespace: e2e-tests-emptydir-wrapper-9qqvl, resource: bindings, ignored listing per whitelist +Dec 20 07:52:52.445: INFO: namespace e2e-tests-emptydir-wrapper-9qqvl deletion completed in 8.22880798s + +• [SLOW TEST:204.295 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:52:52.445: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Dec 20 07:52:52.601: INFO: Waiting up to 5m0s for pod "pod-40edf9b8-042c-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-rxvz9" to be "success or failure" +Dec 20 07:52:52.607: 
INFO: Pod "pod-40edf9b8-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.759941ms +Dec 20 07:52:54.612: INFO: Pod "pod-40edf9b8-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011070267s +Dec 20 07:52:56.617: INFO: Pod "pod-40edf9b8-042c-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016468312s +STEP: Saw pod success +Dec 20 07:52:56.617: INFO: Pod "pod-40edf9b8-042c-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:52:56.625: INFO: Trying to get logs from node 10-6-155-34 pod pod-40edf9b8-042c-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 07:52:56.667: INFO: Waiting for pod pod-40edf9b8-042c-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:52:56.677: INFO: Pod pod-40edf9b8-042c-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:52:56.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-rxvz9" for this suite. 
+Dec 20 07:53:02.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:53:02.730: INFO: namespace: e2e-tests-emptydir-rxvz9, resource: bindings, ignored listing per whitelist +Dec 20 07:53:02.866: INFO: namespace e2e-tests-emptydir-rxvz9 deletion completed in 6.179673329s + +• [SLOW TEST:10.421 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:53:02.866: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating projection with secret that has name projected-secret-test-47243738-042c-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume secrets +Dec 20 07:53:03.028: INFO: Waiting up to 
5m0s for pod "pod-projected-secrets-472565d4-042c-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-4kt2b" to be "success or failure" +Dec 20 07:53:03.031: INFO: Pod "pod-projected-secrets-472565d4-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.510409ms +Dec 20 07:53:05.046: INFO: Pod "pod-projected-secrets-472565d4-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01806004s +Dec 20 07:53:07.052: INFO: Pod "pod-projected-secrets-472565d4-042c-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023687716s +STEP: Saw pod success +Dec 20 07:53:07.052: INFO: Pod "pod-projected-secrets-472565d4-042c-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:53:07.058: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-secrets-472565d4-042c-11e9-b141-0a58ac1c1472 container projected-secret-volume-test: +STEP: delete the pod +Dec 20 07:53:07.080: INFO: Waiting for pod pod-projected-secrets-472565d4-042c-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:53:07.083: INFO: Pod pod-projected-secrets-472565d4-042c-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:53:07.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-4kt2b" for this suite. 
+Dec 20 07:53:13.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:53:13.220: INFO: namespace: e2e-tests-projected-4kt2b, resource: bindings, ignored listing per whitelist +Dec 20 07:53:13.270: INFO: namespace e2e-tests-projected-4kt2b deletion completed in 6.183305866s + +• [SLOW TEST:10.404 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 + should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:53:13.271: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test override arguments +Dec 20 07:53:13.422: INFO: Waiting up to 5m0s for pod 
"client-containers-4d574117-042c-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-containers-mzhrg" to be "success or failure" +Dec 20 07:53:13.425: INFO: Pod "client-containers-4d574117-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.255957ms +Dec 20 07:53:15.444: INFO: Pod "client-containers-4d574117-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022104498s +Dec 20 07:53:17.452: INFO: Pod "client-containers-4d574117-042c-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030338509s +STEP: Saw pod success +Dec 20 07:53:17.452: INFO: Pod "client-containers-4d574117-042c-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:53:17.456: INFO: Trying to get logs from node 10-6-155-34 pod client-containers-4d574117-042c-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 07:53:17.494: INFO: Waiting for pod client-containers-4d574117-042c-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:53:17.498: INFO: Pod client-containers-4d574117-042c-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:53:17.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-containers-mzhrg" for this suite. 
+Dec 20 07:53:23.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:53:23.571: INFO: namespace: e2e-tests-containers-mzhrg, resource: bindings, ignored listing per whitelist +Dec 20 07:53:23.652: INFO: namespace e2e-tests-containers-mzhrg deletion completed in 6.146727256s + +• [SLOW TEST:10.381 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:53:23.653: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. 
+[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: create the pod with lifecycle hook +STEP: delete the pod with lifecycle hook +Dec 20 07:53:31.832: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Dec 20 07:53:31.838: INFO: Pod pod-with-prestop-http-hook still exists +Dec 20 07:53:37.838: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Dec 20 07:53:37.861: INFO: Pod pod-with-prestop-http-hook still exists +Dec 20 07:53:39.838: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Dec 20 07:53:39.843: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:53:39.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-nn8xt" for this suite. 
+Dec 20 07:54:01.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:54:01.952: INFO: namespace: e2e-tests-container-lifecycle-hook-nn8xt, resource: bindings, ignored listing per whitelist +Dec 20 07:54:02.014: INFO: namespace e2e-tests-container-lifecycle-hook-nn8xt deletion completed in 22.150083746s + +• [SLOW TEST:38.361 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + when create a pod with lifecycle hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:54:02.014: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Dec 20 07:54:02.134: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a5fe14a-042c-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-6bs2c" to be "success or failure" +Dec 20 07:54:02.140: INFO: Pod "downwardapi-volume-6a5fe14a-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.878117ms +Dec 20 07:54:04.144: INFO: Pod "downwardapi-volume-6a5fe14a-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010769948s +Dec 20 07:54:06.152: INFO: Pod "downwardapi-volume-6a5fe14a-042c-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018102115s +STEP: Saw pod success +Dec 20 07:54:06.152: INFO: Pod "downwardapi-volume-6a5fe14a-042c-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:54:06.163: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-6a5fe14a-042c-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 07:54:06.194: INFO: Waiting for pod downwardapi-volume-6a5fe14a-042c-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:54:06.202: INFO: Pod downwardapi-volume-6a5fe14a-042c-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:54:06.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-6bs2c" for this suite. +Dec 20 07:54:12.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:54:12.338: INFO: namespace: e2e-tests-projected-6bs2c, resource: bindings, ignored listing per whitelist +Dec 20 07:54:12.382: INFO: namespace e2e-tests-projected-6bs2c deletion completed in 6.17435929s + +• [SLOW TEST:10.368 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[k8s.io] Kubelet when scheduling a read only busybox container + should not write to root filesystem [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:54:12.382: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should not write to root filesystem [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:54:16.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubelet-test-httfn" for this suite. 
+Dec 20 07:55:02.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:55:02.638: INFO: namespace: e2e-tests-kubelet-test-httfn, resource: bindings, ignored listing per whitelist +Dec 20 07:55:02.690: INFO: namespace e2e-tests-kubelet-test-httfn deletion completed in 46.159594014s + +• [SLOW TEST:50.308 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + when scheduling a read only busybox container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 + should not write to root filesystem [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:55:02.690: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: create the rc +STEP: delete the rc 
+STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +W1220 07:55:42.866357 17 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +Dec 20 07:55:42.866: INFO: For apiserver_request_count: +For apiserver_request_latencies_summary: +For etcd_helper_cache_entry_count: +For etcd_helper_cache_hit_count: +For etcd_helper_cache_miss_count: +For etcd_request_cache_add_latencies_summary: +For etcd_request_cache_get_latencies_summary: +For etcd_request_latencies_summary: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:55:42.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-gc-b6bsb" for this suite. 
+Dec 20 07:55:48.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:55:48.952: INFO: namespace: e2e-tests-gc-b6bsb, resource: bindings, ignored listing per whitelist +Dec 20 07:55:49.025: INFO: namespace e2e-tests-gc-b6bsb deletion completed in 6.14925011s + +• [SLOW TEST:46.335 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:55:49.026: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename svc-latency +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-chqxn +I1220 07:55:49.153180 17 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-chqxn, replica count: 1 +I1220 
07:55:50.203549 17 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1220 07:55:51.206261 17 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1220 07:55:52.206564 17 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1220 07:55:53.206790 17 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1220 07:55:54.207690 17 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1220 07:55:55.208045 17 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Dec 20 07:55:55.327: INFO: Created: latency-svc-bkjv8 +Dec 20 07:55:55.331: INFO: Got endpoints: latency-svc-bkjv8 [22.981192ms] +Dec 20 07:55:55.351: INFO: Created: latency-svc-jwj59 +Dec 20 07:55:55.355: INFO: Got endpoints: latency-svc-jwj59 [23.803704ms] +Dec 20 07:55:55.365: INFO: Created: latency-svc-dbb8p +Dec 20 07:55:55.366: INFO: Got endpoints: latency-svc-dbb8p [34.743276ms] +Dec 20 07:55:55.371: INFO: Created: latency-svc-m9fxv +Dec 20 07:55:55.376: INFO: Got endpoints: latency-svc-m9fxv [43.304438ms] +Dec 20 07:55:55.379: INFO: Created: latency-svc-6jdrh +Dec 20 07:55:55.384: INFO: Got endpoints: latency-svc-6jdrh [51.957229ms] +Dec 20 07:55:55.391: INFO: Created: latency-svc-z2hhf +Dec 20 07:55:55.416: INFO: Created: latency-svc-6svhb +Dec 20 07:55:55.416: INFO: Got endpoints: latency-svc-6svhb [31.857173ms] +Dec 20 07:55:55.416: INFO: Got endpoints: latency-svc-z2hhf [83.896756ms] +Dec 20 07:55:55.421: INFO: Created: 
latency-svc-6bkld +Dec 20 07:55:55.426: INFO: Got endpoints: latency-svc-6bkld [93.301204ms] +Dec 20 07:55:55.428: INFO: Created: latency-svc-2zs5l +Dec 20 07:55:55.433: INFO: Got endpoints: latency-svc-2zs5l [100.397452ms] +Dec 20 07:55:55.450: INFO: Created: latency-svc-c7lhw +Dec 20 07:55:55.450: INFO: Got endpoints: latency-svc-c7lhw [117.969625ms] +Dec 20 07:55:55.459: INFO: Created: latency-svc-mdrqx +Dec 20 07:55:55.463: INFO: Got endpoints: latency-svc-mdrqx [130.420872ms] +Dec 20 07:55:55.470: INFO: Created: latency-svc-98d6f +Dec 20 07:55:55.472: INFO: Got endpoints: latency-svc-98d6f [139.777972ms] +Dec 20 07:55:55.482: INFO: Created: latency-svc-dhkb8 +Dec 20 07:55:55.485: INFO: Got endpoints: latency-svc-dhkb8 [151.941084ms] +Dec 20 07:55:55.489: INFO: Created: latency-svc-jc4xc +Dec 20 07:55:55.493: INFO: Got endpoints: latency-svc-jc4xc [160.079647ms] +Dec 20 07:55:55.500: INFO: Created: latency-svc-kcrmt +Dec 20 07:55:55.508: INFO: Got endpoints: latency-svc-kcrmt [174.639813ms] +Dec 20 07:55:55.520: INFO: Created: latency-svc-bg6z4 +Dec 20 07:55:55.524: INFO: Got endpoints: latency-svc-bg6z4 [190.93129ms] +Dec 20 07:55:55.531: INFO: Created: latency-svc-76xsz +Dec 20 07:55:55.535: INFO: Got endpoints: latency-svc-76xsz [202.644295ms] +Dec 20 07:55:55.543: INFO: Created: latency-svc-fqgqt +Dec 20 07:55:55.546: INFO: Got endpoints: latency-svc-fqgqt [190.577579ms] +Dec 20 07:55:55.555: INFO: Created: latency-svc-qd4wk +Dec 20 07:55:55.561: INFO: Got endpoints: latency-svc-qd4wk [194.632468ms] +Dec 20 07:55:55.565: INFO: Created: latency-svc-xndwj +Dec 20 07:55:55.569: INFO: Got endpoints: latency-svc-xndwj [193.162609ms] +Dec 20 07:55:55.574: INFO: Created: latency-svc-c6wrj +Dec 20 07:55:55.580: INFO: Got endpoints: latency-svc-c6wrj [163.525081ms] +Dec 20 07:55:55.596: INFO: Created: latency-svc-bwnz4 +Dec 20 07:55:55.596: INFO: Got endpoints: latency-svc-bwnz4 [180.039531ms] +Dec 20 07:55:55.604: INFO: Created: latency-svc-kwstj +Dec 20 
07:55:55.604: INFO: Got endpoints: latency-svc-kwstj [178.161694ms] +Dec 20 07:55:55.617: INFO: Created: latency-svc-wp5cj +Dec 20 07:55:55.617: INFO: Got endpoints: latency-svc-wp5cj [184.220286ms] +Dec 20 07:55:55.623: INFO: Created: latency-svc-g2z8v +Dec 20 07:55:55.630: INFO: Got endpoints: latency-svc-g2z8v [179.345768ms] +Dec 20 07:55:55.653: INFO: Created: latency-svc-642hx +Dec 20 07:55:55.653: INFO: Created: latency-svc-9kbm2 +Dec 20 07:55:55.653: INFO: Got endpoints: latency-svc-642hx [190.090204ms] +Dec 20 07:55:55.653: INFO: Got endpoints: latency-svc-9kbm2 [180.641571ms] +Dec 20 07:55:55.661: INFO: Created: latency-svc-jcz5f +Dec 20 07:55:55.665: INFO: Got endpoints: latency-svc-jcz5f [180.269977ms] +Dec 20 07:55:55.670: INFO: Created: latency-svc-4qw2c +Dec 20 07:55:55.676: INFO: Got endpoints: latency-svc-4qw2c [183.221462ms] +Dec 20 07:55:55.677: INFO: Created: latency-svc-g4tmz +Dec 20 07:55:55.682: INFO: Got endpoints: latency-svc-g4tmz [174.784164ms] +Dec 20 07:55:55.693: INFO: Created: latency-svc-r5cq2 +Dec 20 07:55:55.696: INFO: Got endpoints: latency-svc-r5cq2 [172.314478ms] +Dec 20 07:55:55.713: INFO: Created: latency-svc-wpf7t +Dec 20 07:55:55.716: INFO: Got endpoints: latency-svc-wpf7t [181.122521ms] +Dec 20 07:55:55.724: INFO: Created: latency-svc-8drtb +Dec 20 07:55:55.732: INFO: Got endpoints: latency-svc-8drtb [186.279229ms] +Dec 20 07:55:55.735: INFO: Created: latency-svc-cw5mk +Dec 20 07:55:55.741: INFO: Got endpoints: latency-svc-cw5mk [180.406575ms] +Dec 20 07:55:55.745: INFO: Created: latency-svc-tq6pg +Dec 20 07:55:55.745: INFO: Got endpoints: latency-svc-tq6pg [176.163206ms] +Dec 20 07:55:55.753: INFO: Created: latency-svc-pdsxd +Dec 20 07:55:55.756: INFO: Got endpoints: latency-svc-pdsxd [176.136553ms] +Dec 20 07:55:55.765: INFO: Created: latency-svc-fgfn9 +Dec 20 07:55:55.766: INFO: Got endpoints: latency-svc-fgfn9 [169.396485ms] +Dec 20 07:55:55.775: INFO: Created: latency-svc-zx4j2 +Dec 20 07:55:55.780: INFO: Got endpoints: 
latency-svc-zx4j2 [175.708541ms] +Dec 20 07:55:55.786: INFO: Created: latency-svc-8jhzt +Dec 20 07:55:55.793: INFO: Got endpoints: latency-svc-8jhzt [176.340136ms] +Dec 20 07:55:55.796: INFO: Created: latency-svc-bkl26 +Dec 20 07:55:55.805: INFO: Created: latency-svc-9ktv8 +Dec 20 07:55:55.814: INFO: Created: latency-svc-gflgx +Dec 20 07:55:55.823: INFO: Created: latency-svc-j94rj +Dec 20 07:55:55.833: INFO: Got endpoints: latency-svc-bkl26 [202.804975ms] +Dec 20 07:55:55.833: INFO: Created: latency-svc-9dgqj +Dec 20 07:55:55.840: INFO: Created: latency-svc-64vsv +Dec 20 07:55:55.849: INFO: Created: latency-svc-4x5fd +Dec 20 07:55:55.858: INFO: Created: latency-svc-bhxmj +Dec 20 07:55:55.877: INFO: Created: latency-svc-r9fmh +Dec 20 07:55:55.883: INFO: Got endpoints: latency-svc-9ktv8 [230.107974ms] +Dec 20 07:55:55.884: INFO: Created: latency-svc-hd9wl +Dec 20 07:55:55.895: INFO: Created: latency-svc-fwq72 +Dec 20 07:55:55.900: INFO: Created: latency-svc-pq5v9 +Dec 20 07:55:55.913: INFO: Created: latency-svc-7vk8g +Dec 20 07:55:55.923: INFO: Created: latency-svc-gvgsp +Dec 20 07:55:55.930: INFO: Got endpoints: latency-svc-gflgx [276.641791ms] +Dec 20 07:55:55.933: INFO: Created: latency-svc-rtjkl +Dec 20 07:55:55.946: INFO: Created: latency-svc-v2524 +Dec 20 07:55:55.956: INFO: Created: latency-svc-cvpsf +Dec 20 07:55:55.967: INFO: Created: latency-svc-89br6 +Dec 20 07:55:55.981: INFO: Got endpoints: latency-svc-j94rj [315.545295ms] +Dec 20 07:55:56.031: INFO: Created: latency-svc-cs7wr +Dec 20 07:55:56.031: INFO: Got endpoints: latency-svc-9dgqj [354.840874ms] +Dec 20 07:55:56.104: INFO: Created: latency-svc-7vpg5 +Dec 20 07:55:56.104: INFO: Got endpoints: latency-svc-64vsv [421.597517ms] +Dec 20 07:55:56.131: INFO: Got endpoints: latency-svc-4x5fd [435.02357ms] +Dec 20 07:55:56.131: INFO: Created: latency-svc-ptb6s +Dec 20 07:55:56.153: INFO: Created: latency-svc-rctxx +Dec 20 07:55:56.180: INFO: Got endpoints: latency-svc-bhxmj [463.836553ms] +Dec 20 
07:55:56.202: INFO: Created: latency-svc-mv28s +Dec 20 07:55:56.231: INFO: Got endpoints: latency-svc-r9fmh [499.092014ms] +Dec 20 07:55:56.255: INFO: Created: latency-svc-mt9qp +Dec 20 07:55:56.292: INFO: Got endpoints: latency-svc-hd9wl [550.703118ms] +Dec 20 07:55:56.312: INFO: Created: latency-svc-9scbb +Dec 20 07:55:56.332: INFO: Got endpoints: latency-svc-fwq72 [586.847889ms] +Dec 20 07:55:56.347: INFO: Created: latency-svc-9n475 +Dec 20 07:55:56.382: INFO: Got endpoints: latency-svc-pq5v9 [626.295845ms] +Dec 20 07:55:56.410: INFO: Created: latency-svc-hr2rh +Dec 20 07:55:56.430: INFO: Got endpoints: latency-svc-7vk8g [664.73125ms] +Dec 20 07:55:56.444: INFO: Created: latency-svc-h6pwf +Dec 20 07:55:56.482: INFO: Got endpoints: latency-svc-gvgsp [702.037917ms] +Dec 20 07:55:56.499: INFO: Created: latency-svc-znlkx +Dec 20 07:55:56.535: INFO: Got endpoints: latency-svc-rtjkl [741.033983ms] +Dec 20 07:55:56.547: INFO: Created: latency-svc-dhj76 +Dec 20 07:55:56.583: INFO: Got endpoints: latency-svc-v2524 [749.80766ms] +Dec 20 07:55:56.600: INFO: Created: latency-svc-qht48 +Dec 20 07:55:56.631: INFO: Got endpoints: latency-svc-cvpsf [747.656291ms] +Dec 20 07:55:56.650: INFO: Created: latency-svc-bmk5k +Dec 20 07:55:56.681: INFO: Got endpoints: latency-svc-89br6 [751.530868ms] +Dec 20 07:55:56.706: INFO: Created: latency-svc-6ftpl +Dec 20 07:55:56.730: INFO: Got endpoints: latency-svc-cs7wr [749.462276ms] +Dec 20 07:55:56.745: INFO: Created: latency-svc-57xq8 +Dec 20 07:55:56.788: INFO: Got endpoints: latency-svc-7vpg5 [756.469957ms] +Dec 20 07:55:56.815: INFO: Created: latency-svc-q8rsl +Dec 20 07:55:56.846: INFO: Got endpoints: latency-svc-ptb6s [742.041353ms] +Dec 20 07:55:56.864: INFO: Created: latency-svc-mj9k8 +Dec 20 07:55:56.880: INFO: Got endpoints: latency-svc-rctxx [749.205545ms] +Dec 20 07:55:56.907: INFO: Created: latency-svc-d5tfj +Dec 20 07:55:56.932: INFO: Got endpoints: latency-svc-mv28s [752.389825ms] +Dec 20 07:55:56.945: INFO: Created: 
latency-svc-vfjn7 +Dec 20 07:55:56.982: INFO: Got endpoints: latency-svc-mt9qp [750.699036ms] +Dec 20 07:55:56.999: INFO: Created: latency-svc-7cp75 +Dec 20 07:55:57.036: INFO: Got endpoints: latency-svc-9scbb [743.69562ms] +Dec 20 07:55:57.065: INFO: Created: latency-svc-lxvbc +Dec 20 07:55:57.086: INFO: Got endpoints: latency-svc-9n475 [753.838778ms] +Dec 20 07:55:57.106: INFO: Created: latency-svc-z2rnn +Dec 20 07:55:57.130: INFO: Got endpoints: latency-svc-hr2rh [747.865114ms] +Dec 20 07:55:57.142: INFO: Created: latency-svc-cx7jb +Dec 20 07:55:57.180: INFO: Got endpoints: latency-svc-h6pwf [750.170806ms] +Dec 20 07:55:57.196: INFO: Created: latency-svc-bphck +Dec 20 07:55:57.239: INFO: Got endpoints: latency-svc-znlkx [757.472059ms] +Dec 20 07:55:57.258: INFO: Created: latency-svc-fn7zn +Dec 20 07:55:57.280: INFO: Got endpoints: latency-svc-dhj76 [745.387601ms] +Dec 20 07:55:57.293: INFO: Created: latency-svc-6g8mt +Dec 20 07:55:57.332: INFO: Got endpoints: latency-svc-qht48 [749.341081ms] +Dec 20 07:55:57.345: INFO: Created: latency-svc-sdbwr +Dec 20 07:55:57.381: INFO: Got endpoints: latency-svc-bmk5k [750.173599ms] +Dec 20 07:55:57.397: INFO: Created: latency-svc-lzb76 +Dec 20 07:55:57.431: INFO: Got endpoints: latency-svc-6ftpl [749.283325ms] +Dec 20 07:55:57.450: INFO: Created: latency-svc-4nkxx +Dec 20 07:55:57.480: INFO: Got endpoints: latency-svc-57xq8 [749.794078ms] +Dec 20 07:55:57.492: INFO: Created: latency-svc-hfghq +Dec 20 07:55:57.531: INFO: Got endpoints: latency-svc-q8rsl [743.498732ms] +Dec 20 07:55:57.556: INFO: Created: latency-svc-t45dz +Dec 20 07:55:57.587: INFO: Got endpoints: latency-svc-mj9k8 [740.989859ms] +Dec 20 07:55:57.601: INFO: Created: latency-svc-vxx9n +Dec 20 07:55:57.636: INFO: Got endpoints: latency-svc-d5tfj [755.6611ms] +Dec 20 07:55:57.654: INFO: Created: latency-svc-gxkc6 +Dec 20 07:55:57.682: INFO: Got endpoints: latency-svc-vfjn7 [750.228084ms] +Dec 20 07:55:57.696: INFO: Created: latency-svc-42492 +Dec 20 
07:55:57.735: INFO: Got endpoints: latency-svc-7cp75 [753.202256ms] +Dec 20 07:55:57.758: INFO: Created: latency-svc-cwth9 +Dec 20 07:55:57.782: INFO: Got endpoints: latency-svc-lxvbc [746.528171ms] +Dec 20 07:55:57.798: INFO: Created: latency-svc-m9ccj +Dec 20 07:55:57.831: INFO: Got endpoints: latency-svc-z2rnn [744.849429ms] +Dec 20 07:55:57.843: INFO: Created: latency-svc-kvhwn +Dec 20 07:55:57.884: INFO: Got endpoints: latency-svc-cx7jb [753.337987ms] +Dec 20 07:55:57.901: INFO: Created: latency-svc-fvpxc +Dec 20 07:55:57.932: INFO: Got endpoints: latency-svc-bphck [751.547363ms] +Dec 20 07:55:57.948: INFO: Created: latency-svc-58tcx +Dec 20 07:55:57.982: INFO: Got endpoints: latency-svc-fn7zn [742.278651ms] +Dec 20 07:55:58.023: INFO: Created: latency-svc-wbdh8 +Dec 20 07:55:58.030: INFO: Got endpoints: latency-svc-6g8mt [749.66157ms] +Dec 20 07:55:58.048: INFO: Created: latency-svc-b77zf +Dec 20 07:55:58.083: INFO: Got endpoints: latency-svc-sdbwr [751.309484ms] +Dec 20 07:55:58.101: INFO: Created: latency-svc-tvdhn +Dec 20 07:55:58.129: INFO: Got endpoints: latency-svc-lzb76 [747.995088ms] +Dec 20 07:55:58.149: INFO: Created: latency-svc-7bkkb +Dec 20 07:55:58.182: INFO: Got endpoints: latency-svc-4nkxx [751.830498ms] +Dec 20 07:55:58.200: INFO: Created: latency-svc-9k8xr +Dec 20 07:55:58.232: INFO: Got endpoints: latency-svc-hfghq [752.061188ms] +Dec 20 07:55:58.253: INFO: Created: latency-svc-f7c5j +Dec 20 07:55:58.281: INFO: Got endpoints: latency-svc-t45dz [750.279525ms] +Dec 20 07:55:58.298: INFO: Created: latency-svc-r6bgp +Dec 20 07:55:58.330: INFO: Got endpoints: latency-svc-vxx9n [743.176944ms] +Dec 20 07:55:58.370: INFO: Created: latency-svc-fhvst +Dec 20 07:55:58.381: INFO: Got endpoints: latency-svc-gxkc6 [744.54894ms] +Dec 20 07:55:58.404: INFO: Created: latency-svc-dlr9s +Dec 20 07:55:58.432: INFO: Got endpoints: latency-svc-42492 [749.30096ms] +Dec 20 07:55:58.452: INFO: Created: latency-svc-t84rt +Dec 20 07:55:58.482: INFO: Got endpoints: 
latency-svc-cwth9 [746.541856ms] +Dec 20 07:55:58.494: INFO: Created: latency-svc-mpdhb +Dec 20 07:55:58.530: INFO: Got endpoints: latency-svc-m9ccj [748.000316ms] +Dec 20 07:55:58.545: INFO: Created: latency-svc-xgl9c +Dec 20 07:55:58.582: INFO: Got endpoints: latency-svc-kvhwn [751.723825ms] +Dec 20 07:55:58.606: INFO: Created: latency-svc-crnmq +Dec 20 07:55:58.631: INFO: Got endpoints: latency-svc-fvpxc [747.078085ms] +Dec 20 07:55:58.663: INFO: Created: latency-svc-fnt79 +Dec 20 07:55:58.681: INFO: Got endpoints: latency-svc-58tcx [749.345281ms] +Dec 20 07:55:58.705: INFO: Created: latency-svc-vhvl2 +Dec 20 07:55:58.733: INFO: Got endpoints: latency-svc-wbdh8 [751.130007ms] +Dec 20 07:55:58.752: INFO: Created: latency-svc-zgrb2 +Dec 20 07:55:58.784: INFO: Got endpoints: latency-svc-b77zf [754.339563ms] +Dec 20 07:55:58.801: INFO: Created: latency-svc-mc47w +Dec 20 07:55:58.830: INFO: Got endpoints: latency-svc-tvdhn [746.897644ms] +Dec 20 07:55:58.862: INFO: Created: latency-svc-8vqx7 +Dec 20 07:55:58.881: INFO: Got endpoints: latency-svc-7bkkb [751.206056ms] +Dec 20 07:55:58.896: INFO: Created: latency-svc-djxc5 +Dec 20 07:55:58.930: INFO: Got endpoints: latency-svc-9k8xr [747.753413ms] +Dec 20 07:55:58.945: INFO: Created: latency-svc-82wx7 +Dec 20 07:55:58.980: INFO: Got endpoints: latency-svc-f7c5j [747.900834ms] +Dec 20 07:55:58.994: INFO: Created: latency-svc-pph64 +Dec 20 07:55:59.030: INFO: Got endpoints: latency-svc-r6bgp [748.648472ms] +Dec 20 07:55:59.051: INFO: Created: latency-svc-7qgp5 +Dec 20 07:55:59.084: INFO: Got endpoints: latency-svc-fhvst [753.205414ms] +Dec 20 07:55:59.101: INFO: Created: latency-svc-s7r5f +Dec 20 07:55:59.136: INFO: Got endpoints: latency-svc-dlr9s [755.255594ms] +Dec 20 07:55:59.153: INFO: Created: latency-svc-6frwp +Dec 20 07:55:59.181: INFO: Got endpoints: latency-svc-t84rt [748.704435ms] +Dec 20 07:55:59.198: INFO: Created: latency-svc-hc8gf +Dec 20 07:55:59.233: INFO: Got endpoints: latency-svc-mpdhb [750.918498ms] 
+Dec 20 07:55:59.248: INFO: Created: latency-svc-sgpm6 +Dec 20 07:55:59.282: INFO: Got endpoints: latency-svc-xgl9c [751.53327ms] +Dec 20 07:55:59.299: INFO: Created: latency-svc-l9wvt +Dec 20 07:55:59.331: INFO: Got endpoints: latency-svc-crnmq [748.322936ms] +Dec 20 07:55:59.343: INFO: Created: latency-svc-64gsq +Dec 20 07:55:59.381: INFO: Got endpoints: latency-svc-fnt79 [749.715251ms] +Dec 20 07:55:59.397: INFO: Created: latency-svc-65mgv +Dec 20 07:55:59.430: INFO: Got endpoints: latency-svc-vhvl2 [748.867042ms] +Dec 20 07:55:59.447: INFO: Created: latency-svc-6qdwl +Dec 20 07:55:59.483: INFO: Got endpoints: latency-svc-zgrb2 [750.100939ms] +Dec 20 07:55:59.500: INFO: Created: latency-svc-wfkcz +Dec 20 07:55:59.532: INFO: Got endpoints: latency-svc-mc47w [748.136263ms] +Dec 20 07:55:59.552: INFO: Created: latency-svc-xs6z5 +Dec 20 07:55:59.582: INFO: Got endpoints: latency-svc-8vqx7 [751.572264ms] +Dec 20 07:55:59.611: INFO: Created: latency-svc-lcr97 +Dec 20 07:55:59.641: INFO: Got endpoints: latency-svc-djxc5 [760.052262ms] +Dec 20 07:55:59.658: INFO: Created: latency-svc-mr9ct +Dec 20 07:55:59.683: INFO: Got endpoints: latency-svc-82wx7 [752.401133ms] +Dec 20 07:55:59.701: INFO: Created: latency-svc-twx6v +Dec 20 07:55:59.732: INFO: Got endpoints: latency-svc-pph64 [751.541427ms] +Dec 20 07:55:59.748: INFO: Created: latency-svc-lctfj +Dec 20 07:55:59.780: INFO: Got endpoints: latency-svc-7qgp5 [750.262531ms] +Dec 20 07:55:59.792: INFO: Created: latency-svc-hvn7z +Dec 20 07:55:59.832: INFO: Got endpoints: latency-svc-s7r5f [747.490264ms] +Dec 20 07:55:59.845: INFO: Created: latency-svc-4pfcf +Dec 20 07:55:59.888: INFO: Got endpoints: latency-svc-6frwp [751.707581ms] +Dec 20 07:55:59.903: INFO: Created: latency-svc-64bmw +Dec 20 07:55:59.934: INFO: Got endpoints: latency-svc-hc8gf [753.627005ms] +Dec 20 07:55:59.950: INFO: Created: latency-svc-hzcnb +Dec 20 07:55:59.982: INFO: Got endpoints: latency-svc-sgpm6 [749.379981ms] +Dec 20 07:56:00.021: INFO: 
Created: latency-svc-mj9nc +Dec 20 07:56:00.033: INFO: Got endpoints: latency-svc-l9wvt [750.554577ms] +Dec 20 07:56:00.085: INFO: Created: latency-svc-pcxs6 +Dec 20 07:56:00.085: INFO: Got endpoints: latency-svc-64gsq [754.711183ms] +Dec 20 07:56:00.112: INFO: Created: latency-svc-5h7l2 +Dec 20 07:56:00.133: INFO: Got endpoints: latency-svc-65mgv [752.719632ms] +Dec 20 07:56:00.152: INFO: Created: latency-svc-99rkx +Dec 20 07:56:00.180: INFO: Got endpoints: latency-svc-6qdwl [749.380056ms] +Dec 20 07:56:00.193: INFO: Created: latency-svc-dscjs +Dec 20 07:56:00.233: INFO: Got endpoints: latency-svc-wfkcz [749.784972ms] +Dec 20 07:56:00.250: INFO: Created: latency-svc-9q9qb +Dec 20 07:56:00.282: INFO: Got endpoints: latency-svc-xs6z5 [750.207182ms] +Dec 20 07:56:00.298: INFO: Created: latency-svc-g9slr +Dec 20 07:56:00.330: INFO: Got endpoints: latency-svc-lcr97 [747.740864ms] +Dec 20 07:56:00.342: INFO: Created: latency-svc-v7xkh +Dec 20 07:56:00.382: INFO: Got endpoints: latency-svc-mr9ct [740.923953ms] +Dec 20 07:56:00.401: INFO: Created: latency-svc-kpp4x +Dec 20 07:56:00.433: INFO: Got endpoints: latency-svc-twx6v [750.602521ms] +Dec 20 07:56:00.447: INFO: Created: latency-svc-x2lrq +Dec 20 07:56:00.480: INFO: Got endpoints: latency-svc-lctfj [747.753336ms] +Dec 20 07:56:00.495: INFO: Created: latency-svc-2lnpw +Dec 20 07:56:00.532: INFO: Got endpoints: latency-svc-hvn7z [751.340723ms] +Dec 20 07:56:00.544: INFO: Created: latency-svc-p5m4z +Dec 20 07:56:00.587: INFO: Got endpoints: latency-svc-4pfcf [755.309778ms] +Dec 20 07:56:00.607: INFO: Created: latency-svc-4vzqr +Dec 20 07:56:00.632: INFO: Got endpoints: latency-svc-64bmw [744.385455ms] +Dec 20 07:56:00.647: INFO: Created: latency-svc-9b8kc +Dec 20 07:56:00.680: INFO: Got endpoints: latency-svc-hzcnb [745.776369ms] +Dec 20 07:56:00.703: INFO: Created: latency-svc-mnn4k +Dec 20 07:56:00.733: INFO: Got endpoints: latency-svc-mj9nc [751.259622ms] +Dec 20 07:56:00.753: INFO: Created: latency-svc-bkp2w +Dec 20 
07:56:00.788: INFO: Got endpoints: latency-svc-pcxs6 [754.778488ms] +Dec 20 07:56:00.807: INFO: Created: latency-svc-hrjtr +Dec 20 07:56:00.837: INFO: Got endpoints: latency-svc-5h7l2 [751.262193ms] +Dec 20 07:56:00.854: INFO: Created: latency-svc-mmx7w +Dec 20 07:56:00.881: INFO: Got endpoints: latency-svc-99rkx [747.966622ms] +Dec 20 07:56:00.902: INFO: Created: latency-svc-vqdcp +Dec 20 07:56:00.932: INFO: Got endpoints: latency-svc-dscjs [751.862596ms] +Dec 20 07:56:00.945: INFO: Created: latency-svc-qfzt4 +Dec 20 07:56:00.981: INFO: Got endpoints: latency-svc-9q9qb [748.290594ms] +Dec 20 07:56:00.994: INFO: Created: latency-svc-wzvtj +Dec 20 07:56:01.032: INFO: Got endpoints: latency-svc-g9slr [749.314438ms] +Dec 20 07:56:01.044: INFO: Created: latency-svc-t5pgm +Dec 20 07:56:01.081: INFO: Got endpoints: latency-svc-v7xkh [751.127673ms] +Dec 20 07:56:01.105: INFO: Created: latency-svc-5vnr6 +Dec 20 07:56:01.138: INFO: Got endpoints: latency-svc-kpp4x [756.059177ms] +Dec 20 07:56:01.150: INFO: Created: latency-svc-xvwc4 +Dec 20 07:56:01.182: INFO: Got endpoints: latency-svc-x2lrq [748.630722ms] +Dec 20 07:56:01.200: INFO: Created: latency-svc-pzcgf +Dec 20 07:56:01.231: INFO: Got endpoints: latency-svc-2lnpw [751.341404ms] +Dec 20 07:56:01.252: INFO: Created: latency-svc-p7szm +Dec 20 07:56:01.282: INFO: Got endpoints: latency-svc-p5m4z [749.721135ms] +Dec 20 07:56:01.296: INFO: Created: latency-svc-g4484 +Dec 20 07:56:01.335: INFO: Got endpoints: latency-svc-4vzqr [748.312523ms] +Dec 20 07:56:01.355: INFO: Created: latency-svc-qxw7l +Dec 20 07:56:01.381: INFO: Got endpoints: latency-svc-9b8kc [748.48145ms] +Dec 20 07:56:01.401: INFO: Created: latency-svc-jx8l8 +Dec 20 07:56:01.432: INFO: Got endpoints: latency-svc-mnn4k [752.137409ms] +Dec 20 07:56:01.451: INFO: Created: latency-svc-gns5r +Dec 20 07:56:01.498: INFO: Got endpoints: latency-svc-bkp2w [764.290538ms] +Dec 20 07:56:01.516: INFO: Created: latency-svc-rg6lc +Dec 20 07:56:01.533: INFO: Got endpoints: 
latency-svc-hrjtr [745.362494ms] +Dec 20 07:56:01.549: INFO: Created: latency-svc-pb794 +Dec 20 07:56:01.581: INFO: Got endpoints: latency-svc-mmx7w [744.124678ms] +Dec 20 07:56:01.595: INFO: Created: latency-svc-xz5sv +Dec 20 07:56:01.633: INFO: Got endpoints: latency-svc-vqdcp [751.817632ms] +Dec 20 07:56:01.655: INFO: Created: latency-svc-r9kcx +Dec 20 07:56:01.681: INFO: Got endpoints: latency-svc-qfzt4 [749.690423ms] +Dec 20 07:56:01.695: INFO: Created: latency-svc-l7h22 +Dec 20 07:56:01.730: INFO: Got endpoints: latency-svc-wzvtj [748.595449ms] +Dec 20 07:56:01.742: INFO: Created: latency-svc-f4phq +Dec 20 07:56:01.781: INFO: Got endpoints: latency-svc-t5pgm [749.175863ms] +Dec 20 07:56:01.794: INFO: Created: latency-svc-4zlq2 +Dec 20 07:56:01.832: INFO: Got endpoints: latency-svc-5vnr6 [751.026114ms] +Dec 20 07:56:01.849: INFO: Created: latency-svc-wfsjg +Dec 20 07:56:01.882: INFO: Got endpoints: latency-svc-xvwc4 [743.573313ms] +Dec 20 07:56:01.907: INFO: Created: latency-svc-8mxrs +Dec 20 07:56:01.932: INFO: Got endpoints: latency-svc-pzcgf [749.976454ms] +Dec 20 07:56:01.953: INFO: Created: latency-svc-zh2g2 +Dec 20 07:56:01.980: INFO: Got endpoints: latency-svc-p7szm [748.888303ms] +Dec 20 07:56:02.001: INFO: Created: latency-svc-zqbh6 +Dec 20 07:56:02.032: INFO: Got endpoints: latency-svc-g4484 [750.160365ms] +Dec 20 07:56:02.051: INFO: Created: latency-svc-tssxf +Dec 20 07:56:02.082: INFO: Got endpoints: latency-svc-qxw7l [746.358136ms] +Dec 20 07:56:02.101: INFO: Created: latency-svc-2k6gs +Dec 20 07:56:02.138: INFO: Got endpoints: latency-svc-jx8l8 [756.927837ms] +Dec 20 07:56:02.161: INFO: Created: latency-svc-cjs6q +Dec 20 07:56:02.180: INFO: Got endpoints: latency-svc-gns5r [747.060923ms] +Dec 20 07:56:02.196: INFO: Created: latency-svc-thkfs +Dec 20 07:56:02.231: INFO: Got endpoints: latency-svc-rg6lc [730.939865ms] +Dec 20 07:56:02.253: INFO: Created: latency-svc-gsbbg +Dec 20 07:56:02.283: INFO: Got endpoints: latency-svc-pb794 [749.952859ms] 
+Dec 20 07:56:02.306: INFO: Created: latency-svc-hknlj +Dec 20 07:56:02.332: INFO: Got endpoints: latency-svc-xz5sv [750.500993ms] +Dec 20 07:56:02.350: INFO: Created: latency-svc-xb7tt +Dec 20 07:56:02.381: INFO: Got endpoints: latency-svc-r9kcx [747.74395ms] +Dec 20 07:56:02.396: INFO: Created: latency-svc-6nxjq +Dec 20 07:56:02.432: INFO: Got endpoints: latency-svc-l7h22 [750.140901ms] +Dec 20 07:56:02.445: INFO: Created: latency-svc-pjqnx +Dec 20 07:56:02.481: INFO: Got endpoints: latency-svc-f4phq [750.5934ms] +Dec 20 07:56:02.502: INFO: Created: latency-svc-schvc +Dec 20 07:56:02.530: INFO: Got endpoints: latency-svc-4zlq2 [748.901625ms] +Dec 20 07:56:02.548: INFO: Created: latency-svc-qnvt8 +Dec 20 07:56:02.585: INFO: Got endpoints: latency-svc-wfsjg [752.562405ms] +Dec 20 07:56:02.604: INFO: Created: latency-svc-lqs4x +Dec 20 07:56:02.631: INFO: Got endpoints: latency-svc-8mxrs [749.233679ms] +Dec 20 07:56:02.644: INFO: Created: latency-svc-rzvh9 +Dec 20 07:56:02.682: INFO: Got endpoints: latency-svc-zh2g2 [748.989459ms] +Dec 20 07:56:02.695: INFO: Created: latency-svc-mf65d +Dec 20 07:56:02.738: INFO: Got endpoints: latency-svc-zqbh6 [757.535553ms] +Dec 20 07:56:02.754: INFO: Created: latency-svc-2lskj +Dec 20 07:56:02.780: INFO: Got endpoints: latency-svc-tssxf [747.797423ms] +Dec 20 07:56:02.810: INFO: Created: latency-svc-b72z6 +Dec 20 07:56:02.831: INFO: Got endpoints: latency-svc-2k6gs [748.929116ms] +Dec 20 07:56:02.846: INFO: Created: latency-svc-7x5sl +Dec 20 07:56:02.881: INFO: Got endpoints: latency-svc-cjs6q [743.539995ms] +Dec 20 07:56:02.899: INFO: Created: latency-svc-9764l +Dec 20 07:56:02.935: INFO: Got endpoints: latency-svc-thkfs [755.678931ms] +Dec 20 07:56:02.952: INFO: Created: latency-svc-ssszp +Dec 20 07:56:02.980: INFO: Got endpoints: latency-svc-gsbbg [749.323559ms] +Dec 20 07:56:03.000: INFO: Created: latency-svc-sc4c7 +Dec 20 07:56:03.034: INFO: Got endpoints: latency-svc-hknlj [750.89463ms] +Dec 20 07:56:03.051: INFO: Created: 
latency-svc-lbq7p +Dec 20 07:56:03.086: INFO: Got endpoints: latency-svc-xb7tt [754.714695ms] +Dec 20 07:56:03.111: INFO: Created: latency-svc-4kcm5 +Dec 20 07:56:03.131: INFO: Got endpoints: latency-svc-6nxjq [749.667227ms] +Dec 20 07:56:03.149: INFO: Created: latency-svc-72m64 +Dec 20 07:56:03.181: INFO: Got endpoints: latency-svc-pjqnx [749.279142ms] +Dec 20 07:56:03.233: INFO: Got endpoints: latency-svc-schvc [752.225278ms] +Dec 20 07:56:03.284: INFO: Got endpoints: latency-svc-qnvt8 [753.932577ms] +Dec 20 07:56:03.334: INFO: Got endpoints: latency-svc-lqs4x [749.176612ms] +Dec 20 07:56:03.382: INFO: Got endpoints: latency-svc-rzvh9 [750.92014ms] +Dec 20 07:56:03.430: INFO: Got endpoints: latency-svc-mf65d [748.198902ms] +Dec 20 07:56:03.482: INFO: Got endpoints: latency-svc-2lskj [743.442687ms] +Dec 20 07:56:03.532: INFO: Got endpoints: latency-svc-b72z6 [752.427504ms] +Dec 20 07:56:03.587: INFO: Got endpoints: latency-svc-7x5sl [756.255817ms] +Dec 20 07:56:03.631: INFO: Got endpoints: latency-svc-9764l [749.608125ms] +Dec 20 07:56:03.682: INFO: Got endpoints: latency-svc-ssszp [746.531905ms] +Dec 20 07:56:03.730: INFO: Got endpoints: latency-svc-sc4c7 [749.683119ms] +Dec 20 07:56:03.782: INFO: Got endpoints: latency-svc-lbq7p [747.969781ms] +Dec 20 07:56:03.831: INFO: Got endpoints: latency-svc-4kcm5 [744.02077ms] +Dec 20 07:56:03.885: INFO: Got endpoints: latency-svc-72m64 [754.195817ms] +Dec 20 07:56:03.885: INFO: Latencies: [23.803704ms 31.857173ms 34.743276ms 43.304438ms 51.957229ms 83.896756ms 93.301204ms 100.397452ms 117.969625ms 130.420872ms 139.777972ms 151.941084ms 160.079647ms 163.525081ms 169.396485ms 172.314478ms 174.639813ms 174.784164ms 175.708541ms 176.136553ms 176.163206ms 176.340136ms 178.161694ms 179.345768ms 180.039531ms 180.269977ms 180.406575ms 180.641571ms 181.122521ms 183.221462ms 184.220286ms 186.279229ms 190.090204ms 190.577579ms 190.93129ms 193.162609ms 194.632468ms 202.644295ms 202.804975ms 230.107974ms 276.641791ms 315.545295ms 
354.840874ms 421.597517ms 435.02357ms 463.836553ms 499.092014ms 550.703118ms 586.847889ms 626.295845ms 664.73125ms 702.037917ms 730.939865ms 740.923953ms 740.989859ms 741.033983ms 742.041353ms 742.278651ms 743.176944ms 743.442687ms 743.498732ms 743.539995ms 743.573313ms 743.69562ms 744.02077ms 744.124678ms 744.385455ms 744.54894ms 744.849429ms 745.362494ms 745.387601ms 745.776369ms 746.358136ms 746.528171ms 746.531905ms 746.541856ms 746.897644ms 747.060923ms 747.078085ms 747.490264ms 747.656291ms 747.740864ms 747.74395ms 747.753336ms 747.753413ms 747.797423ms 747.865114ms 747.900834ms 747.966622ms 747.969781ms 747.995088ms 748.000316ms 748.136263ms 748.198902ms 748.290594ms 748.312523ms 748.322936ms 748.48145ms 748.595449ms 748.630722ms 748.648472ms 748.704435ms 748.867042ms 748.888303ms 748.901625ms 748.929116ms 748.989459ms 749.175863ms 749.176612ms 749.205545ms 749.233679ms 749.279142ms 749.283325ms 749.30096ms 749.314438ms 749.323559ms 749.341081ms 749.345281ms 749.379981ms 749.380056ms 749.462276ms 749.608125ms 749.66157ms 749.667227ms 749.683119ms 749.690423ms 749.715251ms 749.721135ms 749.784972ms 749.794078ms 749.80766ms 749.952859ms 749.976454ms 750.100939ms 750.140901ms 750.160365ms 750.170806ms 750.173599ms 750.207182ms 750.228084ms 750.262531ms 750.279525ms 750.500993ms 750.554577ms 750.5934ms 750.602521ms 750.699036ms 750.89463ms 750.918498ms 750.92014ms 751.026114ms 751.127673ms 751.130007ms 751.206056ms 751.259622ms 751.262193ms 751.309484ms 751.340723ms 751.341404ms 751.530868ms 751.53327ms 751.541427ms 751.547363ms 751.572264ms 751.707581ms 751.723825ms 751.817632ms 751.830498ms 751.862596ms 752.061188ms 752.137409ms 752.225278ms 752.389825ms 752.401133ms 752.427504ms 752.562405ms 752.719632ms 753.202256ms 753.205414ms 753.337987ms 753.627005ms 753.838778ms 753.932577ms 754.195817ms 754.339563ms 754.711183ms 754.714695ms 754.778488ms 755.255594ms 755.309778ms 755.6611ms 755.678931ms 756.059177ms 756.255817ms 756.469957ms 756.927837ms 757.472059ms 
757.535553ms 760.052262ms 764.290538ms] +Dec 20 07:56:03.885: INFO: 50 %ile: 748.648472ms +Dec 20 07:56:03.885: INFO: 90 %ile: 753.627005ms +Dec 20 07:56:03.886: INFO: 99 %ile: 760.052262ms +Dec 20 07:56:03.886: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:56:03.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-svc-latency-chqxn" for this suite. +Dec 20 07:56:19.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:56:20.003: INFO: namespace: e2e-tests-svc-latency-chqxn, resource: bindings, ignored listing per whitelist +Dec 20 07:56:20.121: INFO: namespace e2e-tests-svc-latency-chqxn deletion completed in 16.22817932s + +• [SLOW TEST:31.096 seconds] +[sig-network] Service endpoints latency +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should not be very high [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:56:20.122: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a 
namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should rollback without unnecessary restarts [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 07:56:20.332: INFO: Requires at least 2 nodes (not -1) +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +Dec 20 07:56:20.355: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rmsj9/daemonsets","resourceVersion":"957771"},"items":null} + +Dec 20 07:56:20.359: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rmsj9/pods","resourceVersion":"957771"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:56:20.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-daemonsets-rmsj9" for this suite. 
+Dec 20 07:56:26.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:56:26.461: INFO: namespace: e2e-tests-daemonsets-rmsj9, resource: bindings, ignored listing per whitelist +Dec 20 07:56:26.536: INFO: namespace e2e-tests-daemonsets-rmsj9 deletion completed in 6.151905597s + +S [SKIPPING] [6.414 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should rollback without unnecessary restarts [Conformance] [It] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 + + Dec 20 07:56:20.332: Requires at least 2 nodes (not -1) + + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 +------------------------------ +SSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:56:26.536: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should update annotations on modification 
[NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating the pod +Dec 20 07:56:31.198: INFO: Successfully updated pod "annotationupdatec08196db-042c-11e9-b141-0a58ac1c1472" +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:56:33.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-sphtr" for this suite. +Dec 20 07:56:55.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:56:55.348: INFO: namespace: e2e-tests-downward-api-sphtr, resource: bindings, ignored listing per whitelist +Dec 20 07:56:55.356: INFO: namespace e2e-tests-downward-api-sphtr deletion completed in 22.126279057s + +• [SLOW TEST:28.820 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
+STEP: Creating a kubernetes client +Dec 20 07:56:55.357: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating secret with name secret-test-map-d1af25f5-042c-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume secrets +Dec 20 07:56:55.460: INFO: Waiting up to 5m0s for pod "pod-secrets-d1afb929-042c-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-secrets-jq8j9" to be "success or failure" +Dec 20 07:56:55.466: INFO: Pod "pod-secrets-d1afb929-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 6.240424ms +Dec 20 07:56:57.472: INFO: Pod "pod-secrets-d1afb929-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0121736s +Dec 20 07:56:59.477: INFO: Pod "pod-secrets-d1afb929-042c-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.017489803s +Dec 20 07:57:01.481: INFO: Pod "pod-secrets-d1afb929-042c-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.021343396s +STEP: Saw pod success +Dec 20 07:57:01.481: INFO: Pod "pod-secrets-d1afb929-042c-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:57:01.484: INFO: Trying to get logs from node 10-6-155-34 pod pod-secrets-d1afb929-042c-11e9-b141-0a58ac1c1472 container secret-volume-test: +STEP: delete the pod +Dec 20 07:57:01.504: INFO: Waiting for pod pod-secrets-d1afb929-042c-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:57:01.508: INFO: Pod pod-secrets-d1afb929-042c-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:57:01.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-secrets-jq8j9" for this suite. +Dec 20 07:57:07.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:57:07.611: INFO: namespace: e2e-tests-secrets-jq8j9, resource: bindings, ignored listing per whitelist +Dec 20 07:57:07.663: INFO: namespace e2e-tests-secrets-jq8j9 deletion completed in 6.148942774s + +• [SLOW TEST:12.306 seconds] +[sig-storage] Secrets +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 + should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:57:07.663: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: create the pod with lifecycle hook +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Dec 20 07:57:19.850: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Dec 20 07:57:19.854: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:57:19.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-txdv7" for this suite. 
+Dec 20 07:57:41.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:57:42.022: INFO: namespace: e2e-tests-container-lifecycle-hook-txdv7, resource: bindings, ignored listing per whitelist +Dec 20 07:57:42.025: INFO: namespace e2e-tests-container-lifecycle-hook-txdv7 deletion completed in 22.163201231s + +• [SLOW TEST:34.362 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + when create a pod with lifecycle hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:57:42.025: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: 
Creating a pod to test emptydir 0644 on tmpfs +Dec 20 07:57:42.139: INFO: Waiting up to 5m0s for pod "pod-ed824bd8-042c-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-lqhzp" to be "success or failure" +Dec 20 07:57:42.149: INFO: Pod "pod-ed824bd8-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 10.00128ms +Dec 20 07:57:44.153: INFO: Pod "pod-ed824bd8-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014428613s +Dec 20 07:57:46.166: INFO: Pod "pod-ed824bd8-042c-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027238447s +STEP: Saw pod success +Dec 20 07:57:46.166: INFO: Pod "pod-ed824bd8-042c-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:57:46.171: INFO: Trying to get logs from node 10-6-155-34 pod pod-ed824bd8-042c-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 07:57:46.193: INFO: Waiting for pod pod-ed824bd8-042c-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:57:46.197: INFO: Pod pod-ed824bd8-042c-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:57:46.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-lqhzp" for this suite. 
+Dec 20 07:57:52.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:57:52.280: INFO: namespace: e2e-tests-emptydir-lqhzp, resource: bindings, ignored listing per whitelist +Dec 20 07:57:52.369: INFO: namespace e2e-tests-emptydir-lqhzp deletion completed in 6.161663769s + +• [SLOW TEST:10.344 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:57:52.369: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name configmap-test-volume-map-f3ad6282-042c-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 07:57:52.492: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-f3ae013b-042c-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-configmap-l6jgn" to be "success or failure" +Dec 20 07:57:52.496: INFO: Pod "pod-configmaps-f3ae013b-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086957ms +Dec 20 07:57:54.502: INFO: Pod "pod-configmaps-f3ae013b-042c-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009780781s +Dec 20 07:57:56.507: INFO: Pod "pod-configmaps-f3ae013b-042c-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014574888s +STEP: Saw pod success +Dec 20 07:57:56.507: INFO: Pod "pod-configmaps-f3ae013b-042c-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 07:57:56.510: INFO: Trying to get logs from node 10-6-155-34 pod pod-configmaps-f3ae013b-042c-11e9-b141-0a58ac1c1472 container configmap-volume-test: +STEP: delete the pod +Dec 20 07:57:56.540: INFO: Waiting for pod pod-configmaps-f3ae013b-042c-11e9-b141-0a58ac1c1472 to disappear +Dec 20 07:57:56.543: INFO: Pod pod-configmaps-f3ae013b-042c-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:57:56.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-l6jgn" for this suite. 
+Dec 20 07:58:02.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:58:02.652: INFO: namespace: e2e-tests-configmap-l6jgn, resource: bindings, ignored listing per whitelist +Dec 20 07:58:02.689: INFO: namespace e2e-tests-configmap-l6jgn deletion completed in 6.132597145s + +• [SLOW TEST:10.320 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:58:02.689: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name cm-test-opt-del-f9d4b867-042c-11e9-b141-0a58ac1c1472 +STEP: Creating configMap with name cm-test-opt-upd-f9d4b8cd-042c-11e9-b141-0a58ac1c1472 +STEP: Creating the pod +STEP: Deleting configmap 
cm-test-opt-del-f9d4b867-042c-11e9-b141-0a58ac1c1472 +STEP: Updating configmap cm-test-opt-upd-f9d4b8cd-042c-11e9-b141-0a58ac1c1472 +STEP: Creating configMap with name cm-test-opt-create-f9d4b8f4-042c-11e9-b141-0a58ac1c1472 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 07:59:37.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-tgrsv" for this suite. +Dec 20 07:59:59.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 07:59:59.608: INFO: namespace: e2e-tests-configmap-tgrsv, resource: bindings, ignored listing per whitelist +Dec 20 07:59:59.674: INFO: namespace e2e-tests-configmap-tgrsv deletion completed in 22.160485536s + +• [SLOW TEST:116.985 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 07:59:59.674: INFO: >>> 
kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name cm-test-opt-del-3f9053ba-042d-11e9-b141-0a58ac1c1472 +STEP: Creating configMap with name cm-test-opt-upd-3f90542c-042d-11e9-b141-0a58ac1c1472 +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-3f9053ba-042d-11e9-b141-0a58ac1c1472 +STEP: Updating configmap cm-test-opt-upd-3f90542c-042d-11e9-b141-0a58ac1c1472 +STEP: Creating configMap with name cm-test-opt-create-3f905452-042d-11e9-b141-0a58ac1c1472 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:00:07.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-dgmhc" for this suite. 
+Dec 20 08:00:29.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:00:30.067: INFO: namespace: e2e-tests-projected-dgmhc, resource: bindings, ignored listing per whitelist +Dec 20 08:00:30.076: INFO: namespace e2e-tests-projected-dgmhc deletion completed in 22.134284337s + +• [SLOW TEST:30.401 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:00:30.078: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should run and stop simple daemon [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating simple DaemonSet "daemon-set" +STEP: 
Check that daemon pods launch on every node of the cluster. +Dec 20 08:00:30.227: INFO: Number of nodes with available pods: 0 +Dec 20 08:00:30.227: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:31.238: INFO: Number of nodes with available pods: 0 +Dec 20 08:00:31.238: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:32.244: INFO: Number of nodes with available pods: 0 +Dec 20 08:00:32.244: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:33.244: INFO: Number of nodes with available pods: 0 +Dec 20 08:00:33.244: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:34.242: INFO: Number of nodes with available pods: 1 +Dec 20 08:00:34.242: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:35.238: INFO: Number of nodes with available pods: 2 +Dec 20 08:00:35.238: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Dec 20 08:00:35.263: INFO: Number of nodes with available pods: 1 +Dec 20 08:00:35.263: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:36.282: INFO: Number of nodes with available pods: 1 +Dec 20 08:00:36.283: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:37.280: INFO: Number of nodes with available pods: 1 +Dec 20 08:00:37.280: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:38.277: INFO: Number of nodes with available pods: 1 +Dec 20 08:00:47.279: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:48.273: INFO: Number of nodes with available pods: 1 +Dec 20 08:00:48.273: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:49.274: INFO: Number of nodes with available pods: 1 +Dec 20 08:00:49.274: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:50.273: INFO: Number of nodes with available pods: 1 +Dec 20 08:00:50.273: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:51.285: INFO: Number of nodes with available pods: 1 +Dec 20 08:00:51.285: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:52.282: INFO: Number of nodes with available pods: 1 +Dec 20 08:00:58.275: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:00:59.275: INFO: Number of nodes with available pods: 1 +Dec 20 08:00:59.275: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:01:00.273: INFO: Number of nodes with available pods: 1 +Dec 20 08:01:00.273: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:01:01.279: INFO: Number of nodes with available pods: 1 +Dec 20 08:01:01.279: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:01:02.273: INFO: Number of nodes with available pods: 1 +Dec 20 08:01:08.277: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:01:09.282: INFO: Number of nodes with available pods: 1 +Dec 20 
08:01:09.282: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:01:10.286: INFO: Number of nodes with available pods: 1 +Dec 20 08:01:12.286: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:01:13.274: INFO: Number of nodes with available pods: 2 +Dec 20 08:01:13.274: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-c8qsb, will wait for the garbage collector to delete the pods +Dec 20 08:01:13.355: INFO: Deleting DaemonSet.extensions daemon-set took: 18.265321ms +Dec 20 08:01:13.457: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.33826ms +Dec 20 08:01:49.170: INFO: Number of nodes with available pods: 0 +Dec 20 08:01:49.170: INFO: Number of running nodes: 0, number of available pods: 0 +Dec 20 08:01:49.179: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-c8qsb/daemonsets","resourceVersion":"958589"},"items":null} + +Dec 20 08:01:49.183: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-c8qsb/pods","resourceVersion":"958589"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:01:49.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-daemonsets-c8qsb" for this suite. 
+Dec 20 08:01:55.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:01:55.358: INFO: namespace: e2e-tests-daemonsets-c8qsb, resource: bindings, ignored listing per whitelist +Dec 20 08:01:55.373: INFO: namespace e2e-tests-daemonsets-c8qsb deletion completed in 6.170827329s + +• [SLOW TEST:85.296 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should run and stop simple daemon [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:01:55.374: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the pod +Dec 20 
08:01:55.503: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:02:01.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-init-container-dljb5" for this suite. +Dec 20 08:02:07.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:02:07.316: INFO: namespace: e2e-tests-init-container-dljb5, resource: bindings, ignored listing per whitelist +Dec 20 08:02:07.415: INFO: namespace e2e-tests-init-container-dljb5 deletion completed in 6.166028159s + +• [SLOW TEST:12.041 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:02:07.415: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in 
volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating secret with name s-test-opt-del-8bb2ad1a-042d-11e9-b141-0a58ac1c1472 +STEP: Creating secret with name s-test-opt-upd-8bb2ad87-042d-11e9-b141-0a58ac1c1472 +STEP: Creating the pod +STEP: Deleting secret s-test-opt-del-8bb2ad1a-042d-11e9-b141-0a58ac1c1472 +STEP: Updating secret s-test-opt-upd-8bb2ad87-042d-11e9-b141-0a58ac1c1472 +STEP: Creating secret with name s-test-opt-create-8bb2adc6-042d-11e9-b141-0a58ac1c1472 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:02:15.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-d7zhh" for this suite. 
+Dec 20 08:02:37.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:02:37.911: INFO: namespace: e2e-tests-projected-d7zhh, resource: bindings, ignored listing per whitelist +Dec 20 08:02:37.916: INFO: namespace e2e-tests-projected-d7zhh deletion completed in 22.198704552s + +• [SLOW TEST:30.501 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:02:37.916: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: 
Creating a pod to test downward API volume plugin +Dec 20 08:02:38.078: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9de6f43b-042d-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-5xf2v" to be "success or failure" +Dec 20 08:02:38.085: INFO: Pod "downwardapi-volume-9de6f43b-042d-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 6.77034ms +Dec 20 08:02:40.094: INFO: Pod "downwardapi-volume-9de6f43b-042d-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015033551s +Dec 20 08:02:42.099: INFO: Pod "downwardapi-volume-9de6f43b-042d-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.020339849s +Dec 20 08:02:44.103: INFO: Pod "downwardapi-volume-9de6f43b-042d-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02485896s +STEP: Saw pod success +Dec 20 08:02:44.103: INFO: Pod "downwardapi-volume-9de6f43b-042d-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:02:44.107: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-9de6f43b-042d-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 08:02:44.130: INFO: Waiting for pod downwardapi-volume-9de6f43b-042d-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:02:44.135: INFO: Pod downwardapi-volume-9de6f43b-042d-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:02:44.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-5xf2v" for this suite. 
+Dec 20 08:02:50.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:02:50.217: INFO: namespace: e2e-tests-projected-5xf2v, resource: bindings, ignored listing per whitelist +Dec 20 08:02:50.282: INFO: namespace e2e-tests-projected-5xf2v deletion completed in 6.129425225s + +• [SLOW TEST:12.366 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:02:50.282: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: 
Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:02:56.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-namespaces-gt2w2" for this suite. +Dec 20 08:03:02.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:03:02.545: INFO: namespace: e2e-tests-namespaces-gt2w2, resource: bindings, ignored listing per whitelist +Dec 20 08:03:02.635: INFO: namespace e2e-tests-namespaces-gt2w2 deletion completed in 6.135262762s +STEP: Destroying namespace "e2e-tests-nsdeletetest-dghl5" for this suite. +Dec 20 08:03:02.638: INFO: Namespace e2e-tests-nsdeletetest-dghl5 was already deleted +STEP: Destroying namespace "e2e-tests-nsdeletetest-qgnk6" for this suite. 
+Dec 20 08:03:08.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:03:08.726: INFO: namespace: e2e-tests-nsdeletetest-qgnk6, resource: bindings, ignored listing per whitelist +Dec 20 08:03:08.771: INFO: namespace e2e-tests-nsdeletetest-qgnk6 deletion completed in 6.132919998s + +• [SLOW TEST:18.489 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[k8s.io] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:03:08.771: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test use defaults +Dec 20 08:03:08.895: INFO: Waiting up to 5m0s for pod "client-containers-b044bd70-042d-11e9-b141-0a58ac1c1472" in namespace 
"e2e-tests-containers-68tsc" to be "success or failure" +Dec 20 08:03:08.899: INFO: Pod "client-containers-b044bd70-042d-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.596349ms +Dec 20 08:03:10.906: INFO: Pod "client-containers-b044bd70-042d-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010639024s +Dec 20 08:03:12.916: INFO: Pod "client-containers-b044bd70-042d-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020365844s +STEP: Saw pod success +Dec 20 08:03:12.916: INFO: Pod "client-containers-b044bd70-042d-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:03:12.927: INFO: Trying to get logs from node 10-6-155-34 pod client-containers-b044bd70-042d-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 08:03:12.951: INFO: Waiting for pod client-containers-b044bd70-042d-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:03:12.953: INFO: Pod client-containers-b044bd70-042d-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:03:12.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-containers-68tsc" for this suite. 
+Dec 20 08:03:18.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:03:19.101: INFO: namespace: e2e-tests-containers-68tsc, resource: bindings, ignored listing per whitelist +Dec 20 08:03:19.108: INFO: namespace e2e-tests-containers-68tsc deletion completed in 6.147139928s + +• [SLOW TEST:10.337 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:03:19.109: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 08:03:19.261: 
INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:03:23.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-pods-mz4vm" for this suite. +Dec 20 08:04:03.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:04:03.457: INFO: namespace: e2e-tests-pods-mz4vm, resource: bindings, ignored listing per whitelist +Dec 20 08:04:03.466: INFO: namespace e2e-tests-pods-mz4vm deletion completed in 40.137595564s + +• [SLOW TEST:44.358 seconds] +[k8s.io] Pods +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:04:03.467: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be 
provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap e2e-tests-configmap-dk4mh/configmap-test-d0dd04fd-042d-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 08:04:03.581: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0dd9b22-042d-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-configmap-dk4mh" to be "success or failure" +Dec 20 08:04:03.587: INFO: Pod "pod-configmaps-d0dd9b22-042d-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048981ms +Dec 20 08:04:05.593: INFO: Pod "pod-configmaps-d0dd9b22-042d-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011967453s +Dec 20 08:04:07.597: INFO: Pod "pod-configmaps-d0dd9b22-042d-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015855381s +STEP: Saw pod success +Dec 20 08:04:07.597: INFO: Pod "pod-configmaps-d0dd9b22-042d-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:04:07.600: INFO: Trying to get logs from node 10-6-155-34 pod pod-configmaps-d0dd9b22-042d-11e9-b141-0a58ac1c1472 container env-test: +STEP: delete the pod +Dec 20 08:04:07.623: INFO: Waiting for pod pod-configmaps-d0dd9b22-042d-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:04:07.628: INFO: Pod pod-configmaps-d0dd9b22-042d-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:04:07.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-dk4mh" for this suite. 
+Dec 20 08:04:13.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:04:13.695: INFO: namespace: e2e-tests-configmap-dk4mh, resource: bindings, ignored listing per whitelist +Dec 20 08:04:13.763: INFO: namespace e2e-tests-configmap-dk4mh deletion completed in 6.126793936s + +• [SLOW TEST:10.296 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Update Demo + should scale a replication controller [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:04:13.763: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Update Demo + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 +[It] should scale a replication controller [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating a replication controller +Dec 20 08:04:13.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:14.551: INFO: stderr: "" +Dec 20 08:04:14.551: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Dec 20 08:04:14.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:14.706: INFO: stderr: "" +Dec 20 08:04:14.706: INFO: stdout: "update-demo-nautilus-cz7zc update-demo-nautilus-sxknf " +Dec 20 08:04:14.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-cz7zc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:14.869: INFO: stderr: "" +Dec 20 08:04:14.869: INFO: stdout: "" +Dec 20 08:04:14.869: INFO: update-demo-nautilus-cz7zc is created but not running +Dec 20 08:04:19.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:20.021: INFO: stderr: "" +Dec 20 08:04:20.021: INFO: stdout: "update-demo-nautilus-cz7zc update-demo-nautilus-sxknf " +Dec 20 08:04:20.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-cz7zc -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:20.193: INFO: stderr: "" +Dec 20 08:04:20.193: INFO: stdout: "true" +Dec 20 08:04:20.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-cz7zc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:20.340: INFO: stderr: "" +Dec 20 08:04:20.340: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Dec 20 08:04:20.340: INFO: validating pod update-demo-nautilus-cz7zc +Dec 20 08:04:20.349: INFO: got data: { + "image": "nautilus.jpg" +} + +Dec 20 08:04:20.349: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Dec 20 08:04:20.349: INFO: update-demo-nautilus-cz7zc is verified up and running +Dec 20 08:04:20.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-sxknf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:20.479: INFO: stderr: "" +Dec 20 08:04:20.479: INFO: stdout: "true" +Dec 20 08:04:20.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-sxknf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:20.601: INFO: stderr: "" +Dec 20 08:04:20.601: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Dec 20 08:04:20.601: INFO: validating pod update-demo-nautilus-sxknf +Dec 20 08:04:20.612: INFO: got data: { + "image": "nautilus.jpg" +} + +Dec 20 08:04:20.613: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Dec 20 08:04:20.613: INFO: update-demo-nautilus-sxknf is verified up and running +STEP: scaling down the replication controller +Dec 20 08:04:20.613: INFO: scanned /root for discovery docs: +Dec 20 08:04:20.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:21.812: INFO: stderr: "" +Dec 20 08:04:21.812: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
+Dec 20 08:04:21.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:21.953: INFO: stderr: "" +Dec 20 08:04:21.953: INFO: stdout: "update-demo-nautilus-cz7zc update-demo-nautilus-sxknf " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Dec 20 08:04:26.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:27.099: INFO: stderr: "" +Dec 20 08:04:27.099: INFO: stdout: "update-demo-nautilus-cz7zc update-demo-nautilus-sxknf " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Dec 20 08:04:32.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:32.227: INFO: stderr: "" +Dec 20 08:04:32.227: INFO: stdout: "update-demo-nautilus-cz7zc " +Dec 20 08:04:32.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-cz7zc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:32.393: INFO: stderr: "" +Dec 20 08:04:32.393: INFO: stdout: "true" +Dec 20 08:04:32.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-cz7zc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:32.559: INFO: stderr: "" +Dec 20 08:04:32.559: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Dec 20 08:04:32.559: INFO: validating pod update-demo-nautilus-cz7zc +Dec 20 08:04:32.565: INFO: got data: { + "image": "nautilus.jpg" +} + +Dec 20 08:04:32.565: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Dec 20 08:04:32.565: INFO: update-demo-nautilus-cz7zc is verified up and running +STEP: scaling up the replication controller +Dec 20 08:04:32.566: INFO: scanned /root for discovery docs: +Dec 20 08:04:32.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:33.752: INFO: stderr: "" +Dec 20 08:04:33.752: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Dec 20 08:04:33.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:33.898: INFO: stderr: "" +Dec 20 08:04:33.898: INFO: stdout: "update-demo-nautilus-cz7zc update-demo-nautilus-ql4j9 " +Dec 20 08:04:33.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-cz7zc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:34.046: INFO: stderr: "" +Dec 20 08:04:34.046: INFO: stdout: "true" +Dec 20 08:04:34.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-cz7zc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:34.182: INFO: stderr: "" +Dec 20 08:04:34.182: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Dec 20 08:04:34.182: INFO: validating pod update-demo-nautilus-cz7zc +Dec 20 08:04:34.186: INFO: got data: { + "image": "nautilus.jpg" +} + +Dec 20 08:04:34.186: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Dec 20 08:04:34.186: INFO: update-demo-nautilus-cz7zc is verified up and running +Dec 20 08:04:34.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-ql4j9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:34.329: INFO: stderr: "" +Dec 20 08:04:34.329: INFO: stdout: "" +Dec 20 08:04:34.329: INFO: update-demo-nautilus-ql4j9 is created but not running +Dec 20 08:04:39.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:39.508: INFO: stderr: "" +Dec 20 08:04:39.508: INFO: stdout: "update-demo-nautilus-cz7zc update-demo-nautilus-ql4j9 " +Dec 20 08:04:39.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-cz7zc -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:39.651: INFO: stderr: "" +Dec 20 08:04:39.651: INFO: stdout: "true" +Dec 20 08:04:39.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-cz7zc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:39.811: INFO: stderr: "" +Dec 20 08:04:39.811: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Dec 20 08:04:39.811: INFO: validating pod update-demo-nautilus-cz7zc +Dec 20 08:04:39.820: INFO: got data: { + "image": "nautilus.jpg" +} + +Dec 20 08:04:39.820: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Dec 20 08:04:39.820: INFO: update-demo-nautilus-cz7zc is verified up and running +Dec 20 08:04:39.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-ql4j9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:39.953: INFO: stderr: "" +Dec 20 08:04:39.953: INFO: stdout: "true" +Dec 20 08:04:39.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-ql4j9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:40.104: INFO: stderr: "" +Dec 20 08:04:40.104: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Dec 20 08:04:40.104: INFO: validating pod update-demo-nautilus-ql4j9 +Dec 20 08:04:40.119: INFO: got data: { + "image": "nautilus.jpg" +} + +Dec 20 08:04:40.119: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Dec 20 08:04:40.119: INFO: update-demo-nautilus-ql4j9 is verified up and running +STEP: using delete to clean up resources +Dec 20 08:04:40.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:40.339: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Dec 20 08:04:40.339: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Dec 20 08:04:40.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-j7gwn' +Dec 20 08:04:40.527: INFO: stderr: "No resources found.\n" +Dec 20 08:04:40.527: INFO: stdout: "" +Dec 20 08:04:40.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -l name=update-demo --namespace=e2e-tests-kubectl-j7gwn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Dec 20 08:04:40.712: INFO: stderr: "" +Dec 20 08:04:40.712: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:04:40.712: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-j7gwn" for this suite. +Dec 20 08:04:58.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:04:58.816: INFO: namespace: e2e-tests-kubectl-j7gwn, resource: bindings, ignored listing per whitelist +Dec 20 08:04:58.866: INFO: namespace e2e-tests-kubectl-j7gwn deletion completed in 18.145750621s + +• [SLOW TEST:45.102 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Update Demo + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should scale a replication controller [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] HostPath + should give a volume the correct mode [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] HostPath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:04:58.866: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename hostpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] HostPath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 +[It] should give a volume the correct mode 
[NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test hostPath mode +Dec 20 08:04:58.982: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-pzgqv" to be "success or failure" +Dec 20 08:04:58.984: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.586423ms +Dec 20 08:05:00.989: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006842225s +Dec 20 08:05:02.997: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015071012s +STEP: Saw pod success +Dec 20 08:05:02.997: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" +Dec 20 08:05:03.003: INFO: Trying to get logs from node 10-6-155-34 pod pod-host-path-test container test-container-1: +STEP: delete the pod +Dec 20 08:05:03.026: INFO: Waiting for pod pod-host-path-test to disappear +Dec 20 08:05:03.030: INFO: Pod pod-host-path-test no longer exists +[AfterEach] [sig-storage] HostPath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:05:03.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-hostpath-pzgqv" for this suite. 
+Dec 20 08:05:09.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:05:09.155: INFO: namespace: e2e-tests-hostpath-pzgqv, resource: bindings, ignored listing per whitelist +Dec 20 08:05:09.205: INFO: namespace e2e-tests-hostpath-pzgqv deletion completed in 6.16408943s + +• [SLOW TEST:10.339 seconds] +[sig-storage] HostPath +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 + should give a volume the correct mode [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:05:09.205: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: 
Creating an uninitialized pod in the namespace +Dec 20 08:05:13.409: INFO: error from create uninitialized namespace: +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:05:37.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-namespaces-zvj7r" for this suite. +Dec 20 08:05:43.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:05:43.481: INFO: namespace: e2e-tests-namespaces-zvj7r, resource: bindings, ignored listing per whitelist +Dec 20 08:05:43.576: INFO: namespace e2e-tests-namespaces-zvj7r deletion completed in 6.118347477s +STEP: Destroying namespace "e2e-tests-nsdeletetest-bvmxd" for this suite. +Dec 20 08:05:43.580: INFO: Namespace e2e-tests-nsdeletetest-bvmxd was already deleted +STEP: Destroying namespace "e2e-tests-nsdeletetest-npvw7" for this suite. 
+Dec 20 08:05:49.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:05:49.615: INFO: namespace: e2e-tests-nsdeletetest-npvw7, resource: bindings, ignored listing per whitelist +Dec 20 08:05:49.730: INFO: namespace e2e-tests-nsdeletetest-npvw7 deletion completed in 6.149584747s + +• [SLOW TEST:40.524 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:05:49.730: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Dec 20 08:05:49.834: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10325769-042e-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-fdrzs" to be "success or failure" +Dec 20 08:05:49.843: INFO: Pod "downwardapi-volume-10325769-042e-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 9.129393ms +Dec 20 08:05:51.847: INFO: Pod "downwardapi-volume-10325769-042e-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013535912s +Dec 20 08:05:53.852: INFO: Pod "downwardapi-volume-10325769-042e-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018131219s +STEP: Saw pod success +Dec 20 08:05:53.852: INFO: Pod "downwardapi-volume-10325769-042e-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:05:53.856: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-10325769-042e-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 08:05:53.887: INFO: Waiting for pod downwardapi-volume-10325769-042e-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:05:53.890: INFO: Pod downwardapi-volume-10325769-042e-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:05:53.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-fdrzs" for this suite. 
+Dec 20 08:05:59.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:05:59.952: INFO: namespace: e2e-tests-projected-fdrzs, resource: bindings, ignored listing per whitelist +Dec 20 08:06:00.053: INFO: namespace e2e-tests-projected-fdrzs deletion completed in 6.151299192s + +• [SLOW TEST:10.323 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-network] Services + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:06:00.053: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 +[It] should provide secure master service [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[AfterEach] [sig-network] Services + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:06:00.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-services-hv79c" for this suite. +Dec 20 08:06:06.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:06:06.279: INFO: namespace: e2e-tests-services-hv79c, resource: bindings, ignored listing per whitelist +Dec 20 08:06:06.337: INFO: namespace e2e-tests-services-hv79c deletion completed in 6.120158613s +[AfterEach] [sig-network] Services + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 + +• [SLOW TEST:6.285 seconds] +[sig-network] Services +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide secure master service [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:06:06.338: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in 
namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name projected-configmap-test-volume-map-1a1da10a-042e-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 08:06:06.479: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a1e564f-042e-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-mzlpj" to be "success or failure" +Dec 20 08:06:06.484: INFO: Pod "pod-projected-configmaps-1a1e564f-042e-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.473153ms +Dec 20 08:06:08.489: INFO: Pod "pod-projected-configmaps-1a1e564f-042e-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010007715s +Dec 20 08:06:10.493: INFO: Pod "pod-projected-configmaps-1a1e564f-042e-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014212462s +STEP: Saw pod success +Dec 20 08:06:10.493: INFO: Pod "pod-projected-configmaps-1a1e564f-042e-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:06:10.502: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-configmaps-1a1e564f-042e-11e9-b141-0a58ac1c1472 container projected-configmap-volume-test: +STEP: delete the pod +Dec 20 08:06:10.536: INFO: Waiting for pod pod-projected-configmaps-1a1e564f-042e-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:06:10.542: INFO: Pod pod-projected-configmaps-1a1e564f-042e-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:06:10.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-mzlpj" for this suite. +Dec 20 08:06:16.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:06:16.728: INFO: namespace: e2e-tests-projected-mzlpj, resource: bindings, ignored listing per whitelist +Dec 20 08:06:16.732: INFO: namespace e2e-tests-projected-mzlpj deletion completed in 6.180234355s + +• [SLOW TEST:10.395 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:06:16.732: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should set DefaultMode on files [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Dec 20 08:06:16.882: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2050fd59-042e-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-s7f5m" to be "success or failure" +Dec 20 08:06:16.885: INFO: Pod "downwardapi-volume-2050fd59-042e-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.784628ms +Dec 20 08:06:20.903: INFO: Pod "downwardapi-volume-2050fd59-042e-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021123083s +STEP: Saw pod success +Dec 20 08:06:20.903: INFO: Pod "downwardapi-volume-2050fd59-042e-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:06:20.907: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-2050fd59-042e-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 08:06:20.946: INFO: Waiting for pod downwardapi-volume-2050fd59-042e-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:06:20.951: INFO: Pod downwardapi-volume-2050fd59-042e-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:06:20.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-s7f5m" for this suite. +Dec 20 08:06:26.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:06:27.045: INFO: namespace: e2e-tests-downward-api-s7f5m, resource: bindings, ignored listing per whitelist +Dec 20 08:06:27.108: INFO: namespace e2e-tests-downward-api-s7f5m deletion completed in 6.146536887s + +• [SLOW TEST:10.376 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should set DefaultMode on files [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:06:27.108: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace e2e-tests-statefulset-5fk98 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace e2e-tests-statefulset-5fk98 +STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-5fk98 +Dec 20 08:06:27.265: INFO: Found 0 stateful pods, waiting for 1 +Dec 20 08:06:37.270: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Dec 20 08:06:37.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-5fk98 ss-0 -- /bin/sh -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' +Dec 20 08:06:37.624: INFO: stderr: "" +Dec 20 08:06:37.624: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Dec 20 08:06:37.624: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Dec 20 08:06:37.629: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Dec 20 08:06:47.634: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Dec 20 08:06:47.634: INFO: Waiting for statefulset status.replicas updated to 0 +Dec 20 08:06:47.656: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999145s +Dec 20 08:06:48.674: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988426488s +Dec 20 08:06:49.679: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.970222917s +Dec 20 08:06:50.683: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.965414422s +Dec 20 08:06:51.689: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.960529872s +Dec 20 08:06:56.721: INFO: Verifying statefulset ss doesn't scale past 1 for another 928.032691ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-5fk98 +Dec 20 08:06:57.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-5fk98 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:06:57.973: INFO: stderr: "" +Dec 20 08:06:57.973: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Dec 20 08:06:57.973: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Dec 20 08:06:57.977: INFO: Found 1 stateful pods, waiting for 3 +Dec 20 08:07:07.983: INFO: Waiting for pod ss-0 
to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:07:07.983: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:07:07.983: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Dec 20 08:07:07.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-5fk98 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Dec 20 08:07:08.290: INFO: stderr: "" +Dec 20 08:07:08.290: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Dec 20 08:07:08.290: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Dec 20 08:07:08.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-5fk98 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Dec 20 08:07:08.636: INFO: stderr: "" +Dec 20 08:07:08.636: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Dec 20 08:07:08.636: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Dec 20 08:07:08.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-5fk98 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Dec 20 08:07:09.039: INFO: stderr: "" +Dec 20 08:07:09.039: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Dec 20 08:07:09.039: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Dec 20 08:07:09.039: INFO: Waiting for statefulset status.replicas 
updated to 0 +Dec 20 08:07:09.045: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Dec 20 08:07:19.062: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Dec 20 08:07:19.062: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Dec 20 08:07:19.062: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Dec 20 08:07:19.082: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999233s +Dec 20 08:07:20.091: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992968522s +Dec 20 08:07:21.099: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984586132s +Dec 20 08:07:26.136: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.947063585s +Dec 20 08:07:27.143: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.939567037s +Dec 20 08:07:28.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 932.327947ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-5fk98 +Dec 20 08:07:29.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-5fk98 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:07:29.391: INFO: stderr: "" +Dec 20 08:07:29.391: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Dec 20 08:07:29.391: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Dec 20 08:07:29.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-5fk98 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:07:29.653: INFO: stderr: "" +Dec 20 08:07:29.653: INFO: stdout: "'/tmp/index.html' -> 
'/usr/share/nginx/html/index.html'\n" +Dec 20 08:07:29.653: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Dec 20 08:07:29.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-5fk98 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:07:29.929: INFO: stderr: "" +Dec 20 08:07:29.929: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Dec 20 08:07:29.929: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Dec 20 08:07:29.929: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Dec 20 08:07:49.947: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5fk98 +Dec 20 08:07:49.950: INFO: Scaling statefulset ss to 0 +Dec 20 08:07:49.960: INFO: Waiting for statefulset status.replicas updated to 0 +Dec 20 08:07:49.964: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:07:49.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-statefulset-5fk98" for this suite. 
+Dec 20 08:07:56.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:07:56.031: INFO: namespace: e2e-tests-statefulset-5fk98, resource: bindings, ignored listing per whitelist +Dec 20 08:07:56.137: INFO: namespace e2e-tests-statefulset-5fk98 deletion completed in 6.153703019s + +• [SLOW TEST:89.029 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] [sig-node] Events + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] [sig-node] Events + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:07:56.137: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: retrieving the pod +Dec 20 08:08:00.279: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-5b8dda71-042e-11e9-b141-0a58ac1c1472,GenerateName:,Namespace:e2e-tests-events-9cjmf,SelfLink:/api/v1/namespaces/e2e-tests-events-9cjmf/pods/send-events-5b8dda71-042e-11e9-b141-0a58ac1c1472,UID:5b8d33dd-042e-11e9-b07b-0242ac120004,ResourceVersion:959930,Generation:0,CreationTimestamp:2018-12-20 08:07:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 254074395,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sngtm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sngtm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-sngtm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001352290} {node.kubernetes.io/unreachable Exists NoExecute 0xc0013522b0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:07:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:07:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:07:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:07:56 +0000 UTC }],Message:,Reason:,HostIP:10.6.155.34,PodIP:172.28.20.104,StartTime:2018-12-20 08:07:56 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2018-12-20 08:07:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://b4dbc19eb8491675ab1f59d3f8d57345dfa7dcc9739fd22f1b4bbfa7896cc82a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} + +STEP: checking for scheduler event about the pod +Dec 20 08:08:02.284: INFO: Saw scheduler event for our pod. +STEP: checking for kubelet event about the pod +Dec 20 08:08:04.303: INFO: Saw kubelet event for our pod. 
+STEP: deleting the pod +[AfterEach] [k8s.io] [sig-node] Events + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:08:04.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-events-9cjmf" for this suite. +Dec 20 08:08:42.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:08:42.470: INFO: namespace: e2e-tests-events-9cjmf, resource: bindings, ignored listing per whitelist +Dec 20 08:08:42.503: INFO: namespace e2e-tests-events-9cjmf deletion completed in 38.186742772s + +• [SLOW TEST:46.365 seconds] +[k8s.io] [sig-node] Events +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:08:42.503: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating the pod +Dec 20 08:08:47.199: INFO: Successfully updated pod "annotationupdate77335309-042e-11e9-b141-0a58ac1c1472" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:08:51.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-f825t" for this suite. +Dec 20 08:09:13.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:09:13.375: INFO: namespace: e2e-tests-projected-f825t, resource: bindings, ignored listing per whitelist +Dec 20 08:09:13.391: INFO: namespace e2e-tests-projected-f825t deletion completed in 22.146993685s + +• [SLOW TEST:30.889 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:09:13.392: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating pod pod-subpath-test-configmap-bjkl +STEP: Creating a pod to test atomic-volume-subpath +Dec 20 08:09:13.537: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bjkl" in namespace "e2e-tests-subpath-cxhgp" to be "success or failure" +Dec 20 08:09:13.540: INFO: Pod "pod-subpath-test-configmap-bjkl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.413816ms +Dec 20 08:09:19.556: INFO: Pod "pod-subpath-test-configmap-bjkl": Phase="Running", Reason="", readiness=false. Elapsed: 6.019295847s +Dec 20 08:09:21.562: INFO: Pod "pod-subpath-test-configmap-bjkl": Phase="Running", Reason="", readiness=false. Elapsed: 8.024774987s +Dec 20 08:09:27.581: INFO: Pod "pod-subpath-test-configmap-bjkl": Phase="Running", Reason="", readiness=false. Elapsed: 14.044044044s +Dec 20 08:09:29.586: INFO: Pod "pod-subpath-test-configmap-bjkl": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.04908092s +Dec 20 08:09:31.592: INFO: Pod "pod-subpath-test-configmap-bjkl": Phase="Running", Reason="", readiness=false. Elapsed: 18.054789735s +Dec 20 08:09:37.608: INFO: Pod "pod-subpath-test-configmap-bjkl": Phase="Running", Reason="", readiness=false. Elapsed: 24.071192416s +Dec 20 08:09:39.617: INFO: Pod "pod-subpath-test-configmap-bjkl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.080424432s +STEP: Saw pod success +Dec 20 08:09:39.617: INFO: Pod "pod-subpath-test-configmap-bjkl" satisfied condition "success or failure" +Dec 20 08:09:39.624: INFO: Trying to get logs from node 10-6-155-34 pod pod-subpath-test-configmap-bjkl container test-container-subpath-configmap-bjkl: +STEP: delete the pod +Dec 20 08:09:39.648: INFO: Waiting for pod pod-subpath-test-configmap-bjkl to disappear +Dec 20 08:09:39.653: INFO: Pod pod-subpath-test-configmap-bjkl no longer exists +STEP: Deleting pod pod-subpath-test-configmap-bjkl +Dec 20 08:09:39.653: INFO: Deleting pod "pod-subpath-test-configmap-bjkl" in namespace "e2e-tests-subpath-cxhgp" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:09:39.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-subpath-cxhgp" for this suite. 
+Dec 20 08:09:45.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:09:45.789: INFO: namespace: e2e-tests-subpath-cxhgp, resource: bindings, ignored listing per whitelist +Dec 20 08:09:45.821: INFO: namespace e2e-tests-subpath-cxhgp deletion completed in 6.155784853s + +• [SLOW TEST:32.429 seconds] +[sig-storage] Subpath +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:09:45.821: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-znf5d +Dec 20 08:09:49.939: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-znf5d +STEP: checking the pod's current state and verifying that restartCount is present +Dec 20 08:09:49.942: INFO: Initial restart count of pod liveness-http is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:13:50.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-container-probe-znf5d" for this suite. +Dec 20 08:13:56.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:13:56.764: INFO: namespace: e2e-tests-container-probe-znf5d, resource: bindings, ignored listing per whitelist +Dec 20 08:13:56.908: INFO: namespace e2e-tests-container-probe-znf5d deletion completed in 6.20415656s + +• [SLOW TEST:251.087 seconds] +[k8s.io] Probing container +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:13:56.909: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test override all +Dec 20 08:13:57.060: INFO: Waiting up to 5m0s for pod "client-containers-329b8fc1-042f-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-containers-5hssf" to be "success or failure" +Dec 20 08:13:57.064: INFO: Pod "client-containers-329b8fc1-042f-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.963782ms +Dec 20 08:14:03.962: INFO: Pod "client-containers-329b8fc1-042f-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.902075008s +STEP: Saw pod success +Dec 20 08:14:03.962: INFO: Pod "client-containers-329b8fc1-042f-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:14:03.967: INFO: Trying to get logs from node 10-6-155-34 pod client-containers-329b8fc1-042f-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 08:14:04.091: INFO: Waiting for pod client-containers-329b8fc1-042f-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:14:04.096: INFO: Pod client-containers-329b8fc1-042f-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:14:04.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-containers-5hssf" for this suite. +Dec 20 08:14:10.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:14:10.184: INFO: namespace: e2e-tests-containers-5hssf, resource: bindings, ignored listing per whitelist +Dec 20 08:14:10.375: INFO: namespace e2e-tests-containers-5hssf deletion completed in 6.269923243s + +• [SLOW TEST:13.466 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSS +------------------------------ +[k8s.io] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] KubeletManagedEtcHosts + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:14:10.376: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +STEP: Creating hostNetwork=true pod +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Dec 20 08:14:20.588: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-25j6f PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:14:20.589: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:14:20.740: INFO: Exec stderr: "" +Dec 20 08:14:20.740: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-25j6f PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:14:20.740: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:14:20.856: INFO: Exec stderr: "" +Dec 20 08:14:20.856: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-25j6f PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:14:20.856: 
INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:14:20.965: INFO: Exec stderr: "" +Dec 20 08:14:20.966: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-25j6f PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:14:20.966: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:14:21.130: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Dec 20 08:14:21.130: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-25j6f PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:14:21.130: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:14:21.337: INFO: Exec stderr: "" +Dec 20 08:14:21.337: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-25j6f PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:14:21.337: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:14:21.500: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Dec 20 08:14:21.500: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-25j6f PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:14:21.500: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:14:21.671: INFO: Exec stderr: "" +Dec 20 08:14:21.671: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-25j6f PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:14:21.672: INFO: >>> 
kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:14:21.824: INFO: Exec stderr: "" +Dec 20 08:14:21.824: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-25j6f PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:14:21.824: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:14:21.987: INFO: Exec stderr: "" +Dec 20 08:14:21.987: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-25j6f PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:14:21.987: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:14:22.176: INFO: Exec stderr: "" +[AfterEach] [k8s.io] KubeletManagedEtcHosts + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:14:22.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-25j6f" for this suite. 
+Dec 20 08:15:12.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:15:12.383: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-25j6f, resource: bindings, ignored listing per whitelist +Dec 20 08:15:12.386: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-25j6f deletion completed in 50.172377825s + +• [SLOW TEST:62.010 seconds] +[k8s.io] KubeletManagedEtcHosts +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-network] Services + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:15:12.386: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating service endpoint-test2 in namespace 
e2e-tests-services-dhltp +STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-dhltp to expose endpoints map[] +Dec 20 08:15:12.565: INFO: Get endpoints failed (3.948236ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found +Dec 20 08:15:13.571: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-dhltp exposes endpoints map[] (1.009677606s elapsed) +STEP: Creating pod pod1 in namespace e2e-tests-services-dhltp +STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-dhltp to expose endpoints map[pod1:[80]] +Dec 20 08:15:17.643: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-dhltp exposes endpoints map[pod1:[80]] (4.056493399s elapsed) +STEP: Creating pod pod2 in namespace e2e-tests-services-dhltp +STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-dhltp to expose endpoints map[pod1:[80] pod2:[80]] +Dec 20 08:15:21.740: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-dhltp exposes endpoints map[pod1:[80] pod2:[80]] (4.092146544s elapsed) +STEP: Deleting pod pod1 in namespace e2e-tests-services-dhltp +STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-dhltp to expose endpoints map[pod2:[80]] +Dec 20 08:15:22.766: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-dhltp exposes endpoints map[pod2:[80]] (1.018443143s elapsed) +STEP: Deleting pod pod2 in namespace e2e-tests-services-dhltp +STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-dhltp to expose endpoints map[] +Dec 20 08:15:22.782: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-dhltp exposes endpoints map[] (8.36385ms elapsed) +[AfterEach] [sig-network] Services + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:15:22.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-services-dhltp" for this suite. +Dec 20 08:15:44.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:15:44.865: INFO: namespace: e2e-tests-services-dhltp, resource: bindings, ignored listing per whitelist +Dec 20 08:15:44.973: INFO: namespace e2e-tests-services-dhltp deletion completed in 22.151454305s +[AfterEach] [sig-network] Services + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 + +• [SLOW TEST:32.588 seconds] +[sig-network] Services +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:15:44.974: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] 
[Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward api env vars +Dec 20 08:15:45.089: INFO: Waiting up to 5m0s for pod "downward-api-72ff2464-042f-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-prvm2" to be "success or failure" +Dec 20 08:15:45.093: INFO: Pod "downward-api-72ff2464-042f-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093131ms +Dec 20 08:15:49.105: INFO: Pod "downward-api-72ff2464-042f-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.015612132s +Dec 20 08:15:51.113: INFO: Pod "downward-api-72ff2464-042f-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024228417s +STEP: Saw pod success +Dec 20 08:15:51.114: INFO: Pod "downward-api-72ff2464-042f-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:15:51.118: INFO: Trying to get logs from node 10-6-155-34 pod downward-api-72ff2464-042f-11e9-b141-0a58ac1c1472 container dapi-container: +STEP: delete the pod +Dec 20 08:15:51.152: INFO: Waiting for pod downward-api-72ff2464-042f-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:15:51.161: INFO: Pod downward-api-72ff2464-042f-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:15:51.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-prvm2" for this suite. 
+Dec 20 08:15:57.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:15:57.304: INFO: namespace: e2e-tests-downward-api-prvm2, resource: bindings, ignored listing per whitelist +Dec 20 08:15:57.337: INFO: namespace e2e-tests-downward-api-prvm2 deletion completed in 6.166239867s + +• [SLOW TEST:12.363 seconds] +[sig-node] Downward API +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:15:57.337: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name projected-configmap-test-volume-7a5e8251-042f-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 08:15:57.464: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-7a5f3b03-042f-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-wp5fj" to be "success or failure" +Dec 20 08:15:59.474: INFO: Pod "pod-projected-configmaps-7a5f3b03-042f-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009671777s +Dec 20 08:16:01.484: INFO: Pod "pod-projected-configmaps-7a5f3b03-042f-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019565284s +STEP: Saw pod success +Dec 20 08:16:01.484: INFO: Pod "pod-projected-configmaps-7a5f3b03-042f-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:16:01.488: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-configmaps-7a5f3b03-042f-11e9-b141-0a58ac1c1472 container projected-configmap-volume-test: +STEP: delete the pod +Dec 20 08:16:01.531: INFO: Waiting for pod pod-projected-configmaps-7a5f3b03-042f-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:16:01.537: INFO: Pod pod-projected-configmaps-7a5f3b03-042f-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:16:01.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-wp5fj" for this suite. 
+Dec 20 08:16:07.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:16:07.656: INFO: namespace: e2e-tests-projected-wp5fj, resource: bindings, ignored listing per whitelist +Dec 20 08:16:07.737: INFO: namespace e2e-tests-projected-wp5fj deletion completed in 6.192474377s + +• [SLOW TEST:10.400 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSS +------------------------------ +[k8s.io] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:16:07.737: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
+STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-lcbkl +Dec 20 08:16:11.960: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-lcbkl +STEP: checking the pod's current state and verifying that restartCount is present +Dec 20 08:16:11.963: INFO: Initial restart count of pod liveness-http is 0 +Dec 20 08:16:34.026: INFO: Restart count of pod e2e-tests-container-probe-lcbkl/liveness-http is now 1 (22.063333579s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:16:34.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-container-probe-lcbkl" for this suite. +Dec 20 08:16:40.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:16:40.151: INFO: namespace: e2e-tests-container-probe-lcbkl, resource: bindings, ignored listing per whitelist +Dec 20 08:16:40.232: INFO: namespace e2e-tests-container-probe-lcbkl deletion completed in 6.188574357s + +• [SLOW TEST:32.495 seconds] +[k8s.io] Probing container +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] 
[sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:16:40.232: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir volume type on node default medium +Dec 20 08:16:40.360: INFO: Waiting up to 5m0s for pod "pod-93f13d49-042f-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-jm5x8" to be "success or failure" +Dec 20 08:16:40.364: INFO: Pod "pod-93f13d49-042f-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.971988ms +Dec 20 08:16:42.369: INFO: Pod "pod-93f13d49-042f-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00814153s +Dec 20 08:16:44.374: INFO: Pod "pod-93f13d49-042f-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013828231s +STEP: Saw pod success +Dec 20 08:16:44.374: INFO: Pod "pod-93f13d49-042f-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:16:44.378: INFO: Trying to get logs from node 10-6-155-34 pod pod-93f13d49-042f-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 08:16:44.406: INFO: Waiting for pod pod-93f13d49-042f-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:16:44.410: INFO: Pod pod-93f13d49-042f-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:16:44.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-jm5x8" for this suite. +Dec 20 08:16:50.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:16:50.489: INFO: namespace: e2e-tests-emptydir-jm5x8, resource: bindings, ignored listing per whitelist +Dec 20 08:16:50.563: INFO: namespace e2e-tests-emptydir-jm5x8 deletion completed in 6.145965208s + +• [SLOW TEST:10.331 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + volume on default medium should have the correct mode [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] 
Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:16:50.563: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-9a19affa-042f-11e9-b141-0a58ac1c1472 +STEP: Creating the pod +STEP: Updating configmap projected-configmap-test-upd-9a19affa-042f-11e9-b141-0a58ac1c1472 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:18:09.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-87f8l" for this suite. 
+Dec 20 08:18:31.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:18:31.370: INFO: namespace: e2e-tests-projected-87f8l, resource: bindings, ignored listing per whitelist +Dec 20 08:18:31.526: INFO: namespace e2e-tests-projected-87f8l deletion completed in 22.198709965s + +• [SLOW TEST:100.963 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:18:31.527: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-27ddd +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Dec 20 08:18:31.654: INFO: 
Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Dec 20 08:18:55.752: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 172.28.240.85 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-27ddd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:18:55.752: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:18:56.899: INFO: Found all expected endpoints: [netserver-0] +Dec 20 08:18:56.906: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 172.28.20.118 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-27ddd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Dec 20 08:18:56.906: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +Dec 20 08:18:58.053: INFO: Found all expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:18:58.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-pod-network-test-27ddd" for this suite. 
+Dec 20 08:19:20.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:19:20.162: INFO: namespace: e2e-tests-pod-network-test-27ddd, resource: bindings, ignored listing per whitelist +Dec 20 08:19:20.299: INFO: namespace e2e-tests-pod-network-test-27ddd deletion completed in 22.23854046s + +• [SLOW TEST:48.773 seconds] +[sig-network] Networking +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for node-pod communication: udp [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:19:20.300: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 +[It] should have monotonically increasing restart 
count [Slow][NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-78vv4 +Dec 20 08:19:24.498: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-78vv4 +STEP: checking the pod's current state and verifying that restartCount is present +Dec 20 08:19:24.512: INFO: Initial restart count of pod liveness-http is 0 +Dec 20 08:19:46.584: INFO: Restart count of pod e2e-tests-container-probe-78vv4/liveness-http is now 1 (22.072236036s elapsed) +Dec 20 08:20:06.647: INFO: Restart count of pod e2e-tests-container-probe-78vv4/liveness-http is now 2 (42.135273001s elapsed) +Dec 20 08:20:26.709: INFO: Restart count of pod e2e-tests-container-probe-78vv4/liveness-http is now 3 (1m2.197116109s elapsed) +Dec 20 08:20:46.765: INFO: Restart count of pod e2e-tests-container-probe-78vv4/liveness-http is now 4 (1m22.252962767s elapsed) +Dec 20 08:21:54.980: INFO: Restart count of pod e2e-tests-container-probe-78vv4/liveness-http is now 5 (2m30.46830004s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:21:54.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-container-probe-78vv4" for this suite. 
+Dec 20 08:22:01.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:22:01.196: INFO: namespace: e2e-tests-container-probe-78vv4, resource: bindings, ignored listing per whitelist +Dec 20 08:22:01.209: INFO: namespace e2e-tests-container-probe-78vv4 deletion completed in 6.211440257s + +• [SLOW TEST:160.910 seconds] +[k8s.io] Probing container +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:22:01.210: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test 
downward API volume plugin +Dec 20 08:22:01.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-534a5fab-0430-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-8zvw9" to be "success or failure" +Dec 20 08:22:01.403: INFO: Pod "downwardapi-volume-534a5fab-0430-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.677707ms +Dec 20 08:22:03.409: INFO: Pod "downwardapi-volume-534a5fab-0430-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011438979s +Dec 20 08:22:05.422: INFO: Pod "downwardapi-volume-534a5fab-0430-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024348333s +STEP: Saw pod success +Dec 20 08:22:05.422: INFO: Pod "downwardapi-volume-534a5fab-0430-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:22:05.428: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-534a5fab-0430-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 08:22:05.458: INFO: Waiting for pod downwardapi-volume-534a5fab-0430-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:22:05.465: INFO: Pod downwardapi-volume-534a5fab-0430-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:22:05.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-8zvw9" for this suite. 
+Dec 20 08:22:11.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:22:11.568: INFO: namespace: e2e-tests-downward-api-8zvw9, resource: bindings, ignored listing per whitelist +Dec 20 08:22:11.730: INFO: namespace e2e-tests-downward-api-8zvw9 deletion completed in 6.2582357s + +• [SLOW TEST:10.521 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job + should create a job from an image, then delete the job [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:22:11.731: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[It] should create a job from an image, then delete the job [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: executing a command 
with run --rm and attach with stdin +Dec 20 08:22:11.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 --namespace=e2e-tests-kubectl-jfrpz run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' +Dec 20 08:22:16.437: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" +Dec 20 08:22:16.437: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" +STEP: verifying the job e2e-test-rm-busybox-job was deleted +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:22:18.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-jfrpz" for this suite. 
+Dec 20 08:22:32.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:22:32.581: INFO: namespace: e2e-tests-kubectl-jfrpz, resource: bindings, ignored listing per whitelist +Dec 20 08:22:32.631: INFO: namespace e2e-tests-kubectl-jfrpz deletion completed in 14.180881413s + +• [SLOW TEST:20.900 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl run --rm job + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should create a job from an image, then delete the job [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:22:32.632: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should update labels on modification [NodeConformance] 
[Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating the pod +Dec 20 08:22:37.387: INFO: Successfully updated pod "labelsupdate660611f1-0430-11e9-b141-0a58ac1c1472" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:22:39.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-2tzq7" for this suite. +Dec 20 08:23:01.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:23:01.661: INFO: namespace: e2e-tests-projected-2tzq7, resource: bindings, ignored listing per whitelist +Dec 20 08:23:01.661: INFO: namespace e2e-tests-projected-2tzq7 deletion completed in 22.209527954s + +• [SLOW TEST:29.029 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 
08:23:01.661: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating secret with name secret-test-774fc2fc-0430-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume secrets +Dec 20 08:23:01.837: INFO: Waiting up to 5m0s for pod "pod-secrets-77516bc1-0430-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-secrets-vs9vr" to be "success or failure" +Dec 20 08:23:01.843: INFO: Pod "pod-secrets-77516bc1-0430-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.402377ms +Dec 20 08:23:03.850: INFO: Pod "pod-secrets-77516bc1-0430-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012486386s +Dec 20 08:23:05.856: INFO: Pod "pod-secrets-77516bc1-0430-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.018785481s +Dec 20 08:23:07.861: INFO: Pod "pod-secrets-77516bc1-0430-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023411476s +STEP: Saw pod success +Dec 20 08:23:07.861: INFO: Pod "pod-secrets-77516bc1-0430-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:23:07.869: INFO: Trying to get logs from node 10-6-155-34 pod pod-secrets-77516bc1-0430-11e9-b141-0a58ac1c1472 container secret-volume-test: +STEP: delete the pod +Dec 20 08:23:07.890: INFO: Waiting for pod pod-secrets-77516bc1-0430-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:23:07.897: INFO: Pod pod-secrets-77516bc1-0430-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:23:07.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-secrets-vs9vr" for this suite. +Dec 20 08:23:13.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:23:14.034: INFO: namespace: e2e-tests-secrets-vs9vr, resource: bindings, ignored listing per whitelist +Dec 20 08:23:14.127: INFO: namespace e2e-tests-secrets-vs9vr deletion completed in 6.213948038s + +• [SLOW TEST:12.466 seconds] +[sig-storage] Secrets +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSS +------------------------------ +[k8s.io] [sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] [sig-node] PreStop 
+ /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:23:14.127: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename prestop +STEP: Waiting for a default service account to be provisioned in namespace +[It] should call prestop when killing a pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating server pod server in namespace e2e-tests-prestop-tc88x +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace e2e-tests-prestop-tc88x +STEP: Deleting pre-stop pod +Dec 20 08:23:27.364: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:23:27.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-prestop-tc88x" for this suite. 
+Dec 20 08:24:05.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:24:05.536: INFO: namespace: e2e-tests-prestop-tc88x, resource: bindings, ignored listing per whitelist +Dec 20 08:24:05.601: INFO: namespace e2e-tests-prestop-tc88x deletion completed in 38.220997494s + +• [SLOW TEST:51.474 seconds] +[k8s.io] [sig-node] PreStop +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should call prestop when killing a pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:24:05.602: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name projected-configmap-test-volume-9d715d45-0430-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 08:24:05.812: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-9d732cbe-0430-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-lsr66" to be "success or failure" +Dec 20 08:24:05.816: INFO: Pod "pod-projected-configmaps-9d732cbe-0430-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.423912ms +Dec 20 08:24:07.822: INFO: Pod "pod-projected-configmaps-9d732cbe-0430-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009415033s +Dec 20 08:24:09.826: INFO: Pod "pod-projected-configmaps-9d732cbe-0430-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013816571s +Dec 20 08:24:11.833: INFO: Pod "pod-projected-configmaps-9d732cbe-0430-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02031144s +STEP: Saw pod success +Dec 20 08:24:11.833: INFO: Pod "pod-projected-configmaps-9d732cbe-0430-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:24:11.837: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-configmaps-9d732cbe-0430-11e9-b141-0a58ac1c1472 container projected-configmap-volume-test: +STEP: delete the pod +Dec 20 08:24:11.860: INFO: Waiting for pod pod-projected-configmaps-9d732cbe-0430-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:24:11.864: INFO: Pod pod-projected-configmaps-9d732cbe-0430-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:24:11.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-lsr66" for this suite. 
+Dec 20 08:24:17.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:24:18.037: INFO: namespace: e2e-tests-projected-lsr66, resource: bindings, ignored listing per whitelist +Dec 20 08:24:18.047: INFO: namespace e2e-tests-projected-lsr66 deletion completed in 6.1772852s + +• [SLOW TEST:12.446 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Update Demo + should do a rolling update of a replication controller [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:24:18.048: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Update Demo + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 +[It] should do a rolling update of a replication 
controller [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the initial replication controller +Dec 20 08:24:18.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:24:22.429: INFO: stderr: "" +Dec 20 08:24:22.429: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Dec 20 08:24:22.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:24:22.725: INFO: stderr: "" +Dec 20 08:24:22.725: INFO: stdout: "update-demo-nautilus-m954q update-demo-nautilus-vgmnf " +Dec 20 08:24:22.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-m954q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:24:22.950: INFO: stderr: "" +Dec 20 08:24:22.950: INFO: stdout: "" +Dec 20 08:24:22.950: INFO: update-demo-nautilus-m954q is created but not running +Dec 20 08:24:27.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:24:28.241: INFO: stderr: "" +Dec 20 08:24:28.241: INFO: stdout: "update-demo-nautilus-m954q update-demo-nautilus-vgmnf " +Dec 20 08:24:28.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-m954q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:24:28.411: INFO: stderr: "" +Dec 20 08:24:28.411: INFO: stdout: "true" +Dec 20 08:24:28.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-m954q -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:24:28.598: INFO: stderr: "" +Dec 20 08:24:28.598: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Dec 20 08:24:28.598: INFO: validating pod update-demo-nautilus-m954q +Dec 20 08:24:58.605: INFO: update-demo-nautilus-m954q is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-m954q) +Dec 20 08:25:03.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:25:03.783: INFO: stderr: "" +Dec 20 08:25:03.783: INFO: stdout: "update-demo-nautilus-m954q update-demo-nautilus-vgmnf " +Dec 20 08:25:03.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-m954q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:25:03.923: INFO: stderr: "" +Dec 20 08:25:03.923: INFO: stdout: "true" +Dec 20 08:25:03.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-m954q -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:25:04.093: INFO: stderr: "" +Dec 20 08:25:04.093: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Dec 20 08:25:04.093: INFO: validating pod update-demo-nautilus-m954q +Dec 20 08:25:04.155: INFO: got data: { + "image": "nautilus.jpg" +} + +Dec 20 08:25:04.156: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Dec 20 08:25:04.156: INFO: update-demo-nautilus-m954q is verified up and running +Dec 20 08:25:04.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-vgmnf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:25:04.351: INFO: stderr: "" +Dec 20 08:25:04.351: INFO: stdout: "true" +Dec 20 08:25:04.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-nautilus-vgmnf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:25:04.511: INFO: stderr: "" +Dec 20 08:25:04.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Dec 20 08:25:04.511: INFO: validating pod update-demo-nautilus-vgmnf +Dec 20 08:25:04.535: INFO: got data: { + "image": "nautilus.jpg" +} + +Dec 20 08:25:04.535: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Dec 20 08:25:04.535: INFO: update-demo-nautilus-vgmnf is verified up and running +STEP: rolling-update to new replication controller +Dec 20 08:25:04.537: INFO: scanned /root for discovery docs: +Dec 20 08:25:04.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:25:28.254: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" +Dec 20 08:25:28.254: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Dec 20 08:25:28.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:25:28.424: INFO: stderr: "" +Dec 20 08:25:28.424: INFO: stdout: "update-demo-kitten-qzjmb update-demo-kitten-z9f2s " +Dec 20 08:25:28.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-kitten-qzjmb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:25:28.564: INFO: stderr: "" +Dec 20 08:25:28.564: INFO: stdout: "true" +Dec 20 08:25:28.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-kitten-qzjmb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:25:28.723: INFO: stderr: "" +Dec 20 08:25:28.723: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" +Dec 20 08:25:28.723: INFO: validating pod update-demo-kitten-qzjmb +Dec 20 08:25:28.763: INFO: got data: { + "image": "kitten.jpg" +} + +Dec 20 08:25:28.763: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . +Dec 20 08:25:28.763: INFO: update-demo-kitten-qzjmb is verified up and running +Dec 20 08:25:28.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-kitten-z9f2s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:25:28.896: INFO: stderr: "" +Dec 20 08:25:28.896: INFO: stdout: "true" +Dec 20 08:25:28.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods update-demo-kitten-z9f2s -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zhp4c' +Dec 20 08:25:29.045: INFO: stderr: "" +Dec 20 08:25:29.045: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" +Dec 20 08:25:29.045: INFO: validating pod update-demo-kitten-z9f2s +Dec 20 08:25:29.097: INFO: got data: { + "image": "kitten.jpg" +} + +Dec 20 08:25:29.097: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . +Dec 20 08:25:29.097: INFO: update-demo-kitten-z9f2s is verified up and running +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:25:29.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-zhp4c" for this suite. +Dec 20 08:25:51.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:25:51.193: INFO: namespace: e2e-tests-kubectl-zhp4c, resource: bindings, ignored listing per whitelist +Dec 20 08:25:51.339: INFO: namespace e2e-tests-kubectl-zhp4c deletion completed in 22.236054347s + +• [SLOW TEST:93.291 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Update Demo + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should do a rolling update of a replication controller [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[k8s.io] Container Runtime blackbox test when starting a container that exits + should run with the expected status 
[NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Container Runtime + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:25:51.339: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 
+[AfterEach] [k8s.io] Container Runtime + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:26:24.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-container-runtime-q57nw" for this suite. +Dec 20 08:26:31.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:26:31.138: INFO: namespace: e2e-tests-container-runtime-q57nw, resource: bindings, ignored listing per whitelist +Dec 20 08:26:31.288: INFO: namespace e2e-tests-container-runtime-q57nw deletion completed in 6.292309529s + +• [SLOW TEST:39.949 seconds] +[k8s.io] Container Runtime +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + blackbox test + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 + when starting a container that exits + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 + should run with the expected status [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Variable Expansion + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:26:31.288: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test substitution in container's command +Dec 20 08:26:31.444: INFO: Waiting up to 5m0s for pod "var-expansion-f440a7cf-0430-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-var-expansion-mrvrs" to be "success or failure" +Dec 20 08:26:31.450: INFO: Pod "var-expansion-f440a7cf-0430-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.414288ms +Dec 20 08:26:33.464: INFO: Pod "var-expansion-f440a7cf-0430-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019518011s +Dec 20 08:26:35.473: INFO: Pod "var-expansion-f440a7cf-0430-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.028483067s +Dec 20 08:26:37.481: INFO: Pod "var-expansion-f440a7cf-0430-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.036380054s +STEP: Saw pod success +Dec 20 08:26:37.481: INFO: Pod "var-expansion-f440a7cf-0430-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:26:37.486: INFO: Trying to get logs from node 10-6-155-34 pod var-expansion-f440a7cf-0430-11e9-b141-0a58ac1c1472 container dapi-container: +STEP: delete the pod +Dec 20 08:26:37.521: INFO: Waiting for pod var-expansion-f440a7cf-0430-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:26:37.524: INFO: Pod var-expansion-f440a7cf-0430-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:26:37.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-var-expansion-mrvrs" for this suite. +Dec 20 08:26:43.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:26:43.706: INFO: namespace: e2e-tests-var-expansion-mrvrs, resource: bindings, ignored listing per whitelist +Dec 20 08:26:43.730: INFO: namespace e2e-tests-var-expansion-mrvrs deletion completed in 6.19714388s + +• [SLOW TEST:12.442 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:26:43.730: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Given a Pod with a 'name' label pod-adoption is created +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:26:52.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-replication-controller-xfs4v" for this suite. 
+Dec 20 08:27:14.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:27:15.150: INFO: namespace: e2e-tests-replication-controller-xfs4v, resource: bindings, ignored listing per whitelist +Dec 20 08:27:15.154: INFO: namespace e2e-tests-replication-controller-xfs4v deletion completed in 22.183797145s + +• [SLOW TEST:31.424 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should adopt matching pods on creation [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:27:15.154: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 08:27:35.323: INFO: Container started at 2018-12-20 08:27:18 +0000 UTC, pod became ready at 2018-12-20 08:27:34 +0000 UTC +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:27:35.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-container-probe-c9h9l" for this suite. +Dec 20 08:27:57.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:27:57.410: INFO: namespace: e2e-tests-container-probe-c9h9l, resource: bindings, ignored listing per whitelist +Dec 20 08:27:57.581: INFO: namespace e2e-tests-container-probe-c9h9l deletion completed in 22.250578435s + +• [SLOW TEST:42.427 seconds] +[k8s.io] Probing container +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a 
kubernetes client +Dec 20 08:27:57.581: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name configmap-test-volume-27b4ed88-0431-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 08:27:57.783: INFO: Waiting up to 5m0s for pod "pod-configmaps-27b61d4d-0431-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-configmap-f4z6q" to be "success or failure" +Dec 20 08:27:57.790: INFO: Pod "pod-configmaps-27b61d4d-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 7.635629ms +Dec 20 08:27:59.815: INFO: Pod "pod-configmaps-27b61d4d-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032008944s +Dec 20 08:28:01.830: INFO: Pod "pod-configmaps-27b61d4d-0431-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.047178527s +Dec 20 08:28:03.845: INFO: Pod "pod-configmaps-27b61d4d-0431-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.062664036s +STEP: Saw pod success +Dec 20 08:28:03.845: INFO: Pod "pod-configmaps-27b61d4d-0431-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:28:03.866: INFO: Trying to get logs from node 10-6-155-34 pod pod-configmaps-27b61d4d-0431-11e9-b141-0a58ac1c1472 container configmap-volume-test: +STEP: delete the pod +Dec 20 08:28:03.922: INFO: Waiting for pod pod-configmaps-27b61d4d-0431-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:28:03.926: INFO: Pod pod-configmaps-27b61d4d-0431-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:28:03.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-f4z6q" for this suite. +Dec 20 08:28:09.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:28:10.047: INFO: namespace: e2e-tests-configmap-f4z6q, resource: bindings, ignored listing per whitelist +Dec 20 08:28:10.060: INFO: namespace e2e-tests-configmap-f4z6q deletion completed in 6.125350572s + +• [SLOW TEST:12.479 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] 
[sig-apps] ReplicaSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:28:10.060: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Dec 20 08:28:19.258: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:28:19.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-replicaset-zkp2t" for this suite. 
+Dec 20 08:28:41.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:28:41.492: INFO: namespace: e2e-tests-replicaset-zkp2t, resource: bindings, ignored listing per whitelist +Dec 20 08:28:41.523: INFO: namespace e2e-tests-replicaset-zkp2t deletion completed in 22.22606367s + +• [SLOW TEST:31.463 seconds] +[sig-apps] ReplicaSet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:28:41.523: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0777 on node default medium +Dec 20 08:28:41.707: INFO: Waiting up to 5m0s for pod "pod-41e52d95-0431-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-p6mws" to be "success or 
failure" +Dec 20 08:28:41.714: INFO: Pod "pod-41e52d95-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 7.025114ms +Dec 20 08:28:43.732: INFO: Pod "pod-41e52d95-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024851822s +Dec 20 08:28:45.736: INFO: Pod "pod-41e52d95-0431-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029131236s +STEP: Saw pod success +Dec 20 08:28:45.736: INFO: Pod "pod-41e52d95-0431-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:28:45.747: INFO: Trying to get logs from node 10-6-155-34 pod pod-41e52d95-0431-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 08:28:45.769: INFO: Waiting for pod pod-41e52d95-0431-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:28:45.772: INFO: Pod pod-41e52d95-0431-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:28:45.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-p6mws" for this suite. 
+Dec 20 08:28:51.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:28:51.820: INFO: namespace: e2e-tests-emptydir-p6mws, resource: bindings, ignored listing per whitelist +Dec 20 08:28:52.018: INFO: namespace e2e-tests-emptydir-p6mws deletion completed in 6.241016365s + +• [SLOW TEST:10.495 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (non-root,0777,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:28:52.019: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating 
a pod to test downward API volume plugin +Dec 20 08:28:52.215: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48274ce2-0431-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-pvbjw" to be "success or failure" +Dec 20 08:28:52.220: INFO: Pod "downwardapi-volume-48274ce2-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.495599ms +Dec 20 08:28:54.226: INFO: Pod "downwardapi-volume-48274ce2-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011774932s +Dec 20 08:28:56.232: INFO: Pod "downwardapi-volume-48274ce2-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017665455s +Dec 20 08:28:58.237: INFO: Pod "downwardapi-volume-48274ce2-0431-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022378568s +STEP: Saw pod success +Dec 20 08:28:58.237: INFO: Pod "downwardapi-volume-48274ce2-0431-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:28:58.244: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-48274ce2-0431-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 08:28:58.310: INFO: Waiting for pod downwardapi-volume-48274ce2-0431-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:28:58.317: INFO: Pod downwardapi-volume-48274ce2-0431-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:28:58.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-pvbjw" for this suite. 
+Dec 20 08:29:04.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:29:04.490: INFO: namespace: e2e-tests-downward-api-pvbjw, resource: bindings, ignored listing per whitelist +Dec 20 08:29:04.515: INFO: namespace e2e-tests-downward-api-pvbjw deletion completed in 6.185426942s + +• [SLOW TEST:12.497 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:29:04.516: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Dec 20 08:29:04.647: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Dec 20 08:29:04.671: INFO: Waiting for terminating namespaces to be deleted... 
+Dec 20 08:29:04.676: INFO: +Logging pods the kubelet thinks is on node 10-6-155-33 before test +Dec 20 08:29:04.690: INFO: calico-node-lbxlp from kube-system started at 2018-12-20 07:15:25 +0000 UTC (2 container statuses recorded) +Dec 20 08:29:04.690: INFO: Container calico-node ready: true, restart count 0 +Dec 20 08:29:04.690: INFO: Container install-cni ready: true, restart count 0 +Dec 20 08:29:04.690: INFO: smokeping-sb4jz from kube-system started at 2018-12-13 03:01:41 +0000 UTC (1 container statuses recorded) +Dec 20 08:29:04.690: INFO: Container smokeping ready: true, restart count 5 +Dec 20 08:29:04.690: INFO: wordpress-wordpress-mysql-75d5f8f644-tbzfh from default started at 2018-12-13 03:19:52 +0000 UTC (1 container statuses recorded) +Dec 20 08:29:04.690: INFO: Container wordpress-mysql ready: true, restart count 1 +Dec 20 08:29:04.690: INFO: kube-proxy-84x26 from kube-system started at 2018-12-20 07:15:33 +0000 UTC (1 container statuses recorded) +Dec 20 08:29:04.690: INFO: Container kube-proxy ready: true, restart count 0 +Dec 20 08:29:04.690: INFO: d2048-2048-7b95b48c9b-n6hqw from default started at 2018-12-20 07:19:05 +0000 UTC (1 container statuses recorded) +Dec 20 08:29:04.690: INFO: Container d2048-2048 ready: true, restart count 0 +Dec 20 08:29:04.690: INFO: coredns-87987d698-55xbs from kube-system started at 2018-12-13 03:08:41 +0000 UTC (1 container statuses recorded) +Dec 20 08:29:04.690: INFO: Container coredns ready: true, restart count 1 +Dec 20 08:29:04.690: INFO: coredns-87987d698-4brj5 from kube-system started at 2018-12-17 03:35:16 +0000 UTC (1 container statuses recorded) +Dec 20 08:29:04.690: INFO: Container coredns ready: true, restart count 0 +Dec 20 08:29:04.690: INFO: calico-kube-controllers-5dd6c6f8bc-4xfk4 from kube-system started at 2018-12-17 03:35:16 +0000 UTC (1 container statuses recorded) +Dec 20 08:29:04.690: INFO: Container calico-kube-controllers ready: true, restart count 0 +Dec 20 08:29:04.690: INFO: 
wordpress-wordpress-97f5cbb67-6j958 from default started at 2018-12-17 03:35:16 +0000 UTC (1 container statuses recorded) +Dec 20 08:29:04.690: INFO: Container wordpress-wordpress ready: true, restart count 0 +Dec 20 08:29:04.690: INFO: +Logging pods the kubelet thinks is on node 10-6-155-34 before test +Dec 20 08:29:04.700: INFO: calico-node-mz7bv from kube-system started at 2018-12-20 07:15:25 +0000 UTC (2 container statuses recorded) +Dec 20 08:29:04.700: INFO: Container calico-node ready: true, restart count 0 +Dec 20 08:29:04.700: INFO: Container install-cni ready: true, restart count 0 +Dec 20 08:29:04.700: INFO: sonobuoy from heptio-sonobuoy started at 2018-12-20 07:21:15 +0000 UTC (3 container statuses recorded) +Dec 20 08:29:04.700: INFO: Container cleanup ready: true, restart count 0 +Dec 20 08:29:04.700: INFO: Container forwarder ready: true, restart count 0 +Dec 20 08:29:04.700: INFO: Container kube-sonobuoy ready: true, restart count 0 +Dec 20 08:29:04.700: INFO: kube-proxy-m94wf from kube-system started at 2018-12-20 07:15:39 +0000 UTC (1 container statuses recorded) +Dec 20 08:29:04.700: INFO: Container kube-proxy ready: true, restart count 0 +Dec 20 08:29:04.700: INFO: sonobuoy-e2e-job-b25697b233924eae from heptio-sonobuoy started at 2018-12-20 07:21:27 +0000 UTC (2 container statuses recorded) +Dec 20 08:29:04.700: INFO: Container e2e ready: true, restart count 0 +Dec 20 08:29:04.700: INFO: Container sonobuoy-worker ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: verifying the node has the label node 10-6-155-33 +STEP: verifying the node has the label node 10-6-155-34 +Dec 20 08:29:04.746: INFO: Pod d2048-2048-7b95b48c9b-n6hqw requesting resource cpu=50m on Node 10-6-155-33 +Dec 20 08:29:04.746: INFO: Pod 
wordpress-wordpress-97f5cbb67-6j958 requesting resource cpu=500m on Node 10-6-155-33 +Dec 20 08:29:04.746: INFO: Pod wordpress-wordpress-mysql-75d5f8f644-tbzfh requesting resource cpu=500m on Node 10-6-155-33 +Dec 20 08:29:04.746: INFO: Pod sonobuoy requesting resource cpu=0m on Node 10-6-155-34 +Dec 20 08:29:04.746: INFO: Pod sonobuoy-e2e-job-b25697b233924eae requesting resource cpu=0m on Node 10-6-155-34 +Dec 20 08:29:04.746: INFO: Pod calico-kube-controllers-5dd6c6f8bc-4xfk4 requesting resource cpu=412m on Node 10-6-155-33 +Dec 20 08:29:04.746: INFO: Pod calico-node-lbxlp requesting resource cpu=250m on Node 10-6-155-33 +Dec 20 08:29:04.746: INFO: Pod calico-node-mz7bv requesting resource cpu=250m on Node 10-6-155-34 +Dec 20 08:29:04.746: INFO: Pod coredns-87987d698-4brj5 requesting resource cpu=250m on Node 10-6-155-33 +Dec 20 08:29:04.746: INFO: Pod coredns-87987d698-55xbs requesting resource cpu=250m on Node 10-6-155-33 +Dec 20 08:29:04.746: INFO: Pod kube-proxy-84x26 requesting resource cpu=250m on Node 10-6-155-33 +Dec 20 08:29:04.746: INFO: Pod kube-proxy-m94wf requesting resource cpu=250m on Node 10-6-155-34 +Dec 20 08:29:04.746: INFO: Pod smokeping-sb4jz requesting resource cpu=125m on Node 10-6-155-33 +STEP: Starting Pods to consume most of the cluster CPU. +STEP: Creating another pod that requires unavailable amount of CPU. 
+STEP: Considering event: +Type = [Normal], Name = [filler-pod-4fa274b2-0431-11e9-b141-0a58ac1c1472.1571fd337d40babd], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-xgkdc/filler-pod-4fa274b2-0431-11e9-b141-0a58ac1c1472 to 10-6-155-33] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4fa274b2-0431-11e9-b141-0a58ac1c1472.1571fd343f9d7fc6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4fa274b2-0431-11e9-b141-0a58ac1c1472.1571fd3455265ee1], Reason = [Created], Message = [Created container] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4fa274b2-0431-11e9-b141-0a58ac1c1472.1571fd34750303c1], Reason = [Started], Message = [Started container] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4fa3eeb3-0431-11e9-b141-0a58ac1c1472.1571fd337ddfc91a], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-xgkdc/filler-pod-4fa3eeb3-0431-11e9-b141-0a58ac1c1472 to 10-6-155-34] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4fa3eeb3-0431-11e9-b141-0a58ac1c1472.1571fd33fed11f1c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4fa3eeb3-0431-11e9-b141-0a58ac1c1472.1571fd34139cf109], Reason = [Created], Message = [Created container] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4fa3eeb3-0431-11e9-b141-0a58ac1c1472.1571fd3422f88295], Reason = [Started], Message = [Started container] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.1571fd34e4c154fc], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] 
+STEP: removing the label node off the node 10-6-155-33 +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node 10-6-155-34 +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:29:11.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-sched-pred-xgkdc" for this suite. +Dec 20 08:29:17.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:29:17.982: INFO: namespace: e2e-tests-sched-pred-xgkdc, resource: bindings, ignored listing per whitelist +Dec 20 08:29:18.085: INFO: namespace e2e-tests-sched-pred-xgkdc deletion completed in 6.229085971s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:13.569 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates resource limits of pods that are allowed to run [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
+[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:29:18.085: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace e2e-tests-statefulset-2nl8l +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a new StatefulSet +Dec 20 08:29:18.267: INFO: Found 0 stateful pods, waiting for 3 +Dec 20 08:29:28.273: INFO: Found 2 stateful pods, waiting for 3 +Dec 20 08:29:38.273: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:29:38.273: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:29:38.273: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine +Dec 20 08:29:38.307: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update 
+Dec 20 08:29:48.345: INFO: Updating stateful set ss2 +Dec 20 08:29:48.354: INFO: Waiting for Pod e2e-tests-statefulset-2nl8l/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666 +STEP: Restoring Pods to the correct revision when they are deleted +Dec 20 08:29:58.428: INFO: Found 2 stateful pods, waiting for 3 +Dec 20 08:30:08.436: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:30:08.436: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:30:08.436: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Dec 20 08:30:08.490: INFO: Updating stateful set ss2 +Dec 20 08:30:08.519: INFO: Waiting for Pod e2e-tests-statefulset-2nl8l/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666 +Dec 20 08:30:18.559: INFO: Updating stateful set ss2 +Dec 20 08:30:18.566: INFO: Waiting for StatefulSet e2e-tests-statefulset-2nl8l/ss2 to complete update +Dec 20 08:30:18.566: INFO: Waiting for Pod e2e-tests-statefulset-2nl8l/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Dec 20 08:30:28.579: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2nl8l +Dec 20 08:30:28.596: INFO: Scaling statefulset ss2 to 0 +Dec 20 08:31:08.626: INFO: Waiting for statefulset status.replicas updated to 0 +Dec 20 08:31:08.629: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:31:08.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace 
"e2e-tests-statefulset-2nl8l" for this suite. +Dec 20 08:31:14.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:31:14.693: INFO: namespace: e2e-tests-statefulset-2nl8l, resource: bindings, ignored listing per whitelist +Dec 20 08:31:14.864: INFO: namespace e2e-tests-statefulset-2nl8l deletion completed in 6.21265979s + +• [SLOW TEST:116.779 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:31:14.864: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating projection with secret that has name projected-secret-test-9d43592a-0431-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume secrets +Dec 20 08:31:15.006: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d4442b7-0431-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-dblcs" to be "success or failure" +Dec 20 08:31:15.014: INFO: Pod "pod-projected-secrets-9d4442b7-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 7.943833ms +Dec 20 08:31:17.021: INFO: Pod "pod-projected-secrets-9d4442b7-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015170376s +Dec 20 08:31:19.026: INFO: Pod "pod-projected-secrets-9d4442b7-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019534603s +Dec 20 08:31:21.031: INFO: Pod "pod-projected-secrets-9d4442b7-0431-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.024988262s +STEP: Saw pod success +Dec 20 08:31:21.031: INFO: Pod "pod-projected-secrets-9d4442b7-0431-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:31:21.035: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-secrets-9d4442b7-0431-11e9-b141-0a58ac1c1472 container projected-secret-volume-test: +STEP: delete the pod +Dec 20 08:31:21.062: INFO: Waiting for pod pod-projected-secrets-9d4442b7-0431-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:31:21.066: INFO: Pod pod-projected-secrets-9d4442b7-0431-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:31:21.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-dblcs" for this suite. +Dec 20 08:31:27.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:31:27.121: INFO: namespace: e2e-tests-projected-dblcs, resource: bindings, ignored listing per whitelist +Dec 20 08:31:27.264: INFO: namespace e2e-tests-projected-dblcs deletion completed in 6.191410311s + +• [SLOW TEST:12.400 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:31:27.264: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] deployment should delete old replica sets [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 08:31:27.433: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Dec 20 08:31:32.438: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Dec 20 08:31:32.438: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Dec 20 08:31:36.482: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-ktfmx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ktfmx/deployments/test-cleanup-deployment,UID:a7aabad0-0431-11e9-b07b-0242ac120004,ResourceVersion:963653,Generation:1,CreationTimestamp:2018-12-20 08:31:32 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2018-12-20 08:31:32 +0000 UTC 2018-12-20 08:31:32 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2018-12-20 08:31:36 +0000 UTC 2018-12-20 08:31:32 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-7dbbfcf846" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} + +Dec 20 08:31:36.489: INFO: New ReplicaSet "test-cleanup-deployment-7dbbfcf846" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-7dbbfcf846,GenerateName:,Namespace:e2e-tests-deployment-ktfmx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ktfmx/replicasets/test-cleanup-deployment-7dbbfcf846,UID:a7acb4ba-0431-11e9-b07b-0242ac120004,ResourceVersion:963644,Generation:1,CreationTimestamp:2018-12-20 08:31:32 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 7dbbfcf846,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a7aabad0-0431-11e9-b07b-0242ac120004 0xc001b6e157 0xc001b6e158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 7dbbfcf846,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 7dbbfcf846,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Dec 20 08:31:36.496: INFO: Pod "test-cleanup-deployment-7dbbfcf846-v59rg" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-7dbbfcf846-v59rg,GenerateName:test-cleanup-deployment-7dbbfcf846-,Namespace:e2e-tests-deployment-ktfmx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ktfmx/pods/test-cleanup-deployment-7dbbfcf846-v59rg,UID:a7ad6397-0431-11e9-b07b-0242ac120004,ResourceVersion:963643,Generation:0,CreationTimestamp:2018-12-20 08:31:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 7dbbfcf846,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-7dbbfcf846 a7acb4ba-0431-11e9-b07b-0242ac120004 0xc001b6e827 0xc001b6e828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gq9dh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gq9dh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-gq9dh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b6e8b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b6e8d0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:32 +0000 UTC }],Message:,Reason:,HostIP:10.6.155.34,PodIP:172.28.20.111,StartTime:2018-12-20 08:31:32 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2018-12-20 08:31:36 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis-amd64@sha256:2238f5a02d2648d41cc94a88f084060fbfa860890220328eb92696bf2ac649c9 docker://e1eb3bee4f4bff19e101b0864cd19b56283685b56c84617414bf0da50737f97e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] 
Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:31:36.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-deployment-ktfmx" for this suite. +Dec 20 08:31:42.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:31:42.579: INFO: namespace: e2e-tests-deployment-ktfmx, resource: bindings, ignored listing per whitelist +Dec 20 08:31:42.742: INFO: namespace e2e-tests-deployment-ktfmx deletion completed in 6.23583602s + +• [SLOW TEST:15.479 seconds] +[sig-apps] Deployment +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + deployment should delete old replica sets [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:31:42.742: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
+STEP: Creating a pod to test emptydir 0777 on tmpfs +Dec 20 08:31:42.906: INFO: Waiting up to 5m0s for pod "pod-ade5a63d-0431-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-4f2v8" to be "success or failure" +Dec 20 08:31:42.910: INFO: Pod "pod-ade5a63d-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.854839ms +Dec 20 08:31:44.927: INFO: Pod "pod-ade5a63d-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021040602s +Dec 20 08:31:46.933: INFO: Pod "pod-ade5a63d-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027067115s +Dec 20 08:31:48.937: INFO: Pod "pod-ade5a63d-0431-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0317723s +STEP: Saw pod success +Dec 20 08:31:48.937: INFO: Pod "pod-ade5a63d-0431-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:31:48.940: INFO: Trying to get logs from node 10-6-155-34 pod pod-ade5a63d-0431-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 08:31:48.963: INFO: Waiting for pod pod-ade5a63d-0431-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:31:48.965: INFO: Pod pod-ade5a63d-0431-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:31:48.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-4f2v8" for this suite. 
+Dec 20 08:31:54.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:31:55.118: INFO: namespace: e2e-tests-emptydir-4f2v8, resource: bindings, ignored listing per whitelist +Dec 20 08:31:55.160: INFO: namespace e2e-tests-emptydir-4f2v8 deletion completed in 6.187616898s + +• [SLOW TEST:12.418 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (root,0777,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:31:55.160: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] deployment should support proportional scaling [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 08:31:55.309: INFO: Creating deployment "nginx-deployment" +Dec 20 08:31:55.318: 
INFO: Waiting for observed generation 1 +Dec 20 08:31:57.332: INFO: Waiting for all required pods to come up +Dec 20 08:31:57.342: INFO: Pod name nginx: Found 10 pods out of 10 +STEP: ensuring each pod is running +Dec 20 08:32:07.366: INFO: Waiting for deployment "nginx-deployment" to complete +Dec 20 08:32:07.375: INFO: Updating deployment "nginx-deployment" with a non-existent image +Dec 20 08:32:07.386: INFO: Updating deployment nginx-deployment +Dec 20 08:32:07.386: INFO: Waiting for observed generation 2 +Dec 20 08:32:09.395: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Dec 20 08:32:09.404: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Dec 20 08:32:09.415: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas +Dec 20 08:32:09.435: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Dec 20 08:32:09.435: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Dec 20 08:32:09.443: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas +Dec 20 08:32:09.451: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas +Dec 20 08:32:09.452: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 +Dec 20 08:32:09.464: INFO: Updating deployment nginx-deployment +Dec 20 08:32:09.464: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas +Dec 20 08:32:09.476: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Dec 20 08:32:09.488: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Dec 20 08:32:09.500: 
INFO: Deployment "nginx-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-fbktq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fbktq/deployments/nginx-deployment,UID:b54bbb08-0431-11e9-b07b-0242ac120004,ResourceVersion:963983,Generation:3,CreationTimestamp:2018-12-20 08:31:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2018-12-20 08:32:07 +0000 UTC 2018-12-20 08:31:55 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-65bbdb5f8" is progressing.} {Available False 2018-12-20 08:32:09 +0000 UTC 2018-12-20 08:32:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} + +Dec 20 08:32:09.522: INFO: New ReplicaSet "nginx-deployment-65bbdb5f8" of Deployment "nginx-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8,GenerateName:,Namespace:e2e-tests-deployment-fbktq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fbktq/replicasets/nginx-deployment-65bbdb5f8,UID:bc7db5ce-0431-11e9-b07b-0242ac120004,ResourceVersion:963977,Generation:3,CreationTimestamp:2018-12-20 08:32:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
65bbdb5f8,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b54bbb08-0431-11e9-b07b-0242ac120004 0xc0024867d7 0xc0024867d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Dec 20 08:32:09.522: INFO: 
All old ReplicaSets of Deployment "nginx-deployment": +Dec 20 08:32:09.524: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965,GenerateName:,Namespace:e2e-tests-deployment-fbktq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fbktq/replicasets/nginx-deployment-555b55d965,UID:b54c7258-0431-11e9-b07b-0242ac120004,ResourceVersion:963975,Generation:3,CreationTimestamp:2018-12-20 08:31:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b54bbb08-0431-11e9-b07b-0242ac120004 0xc002486717 0xc002486718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} +Dec 20 08:32:09.572: INFO: Pod "nginx-deployment-555b55d965-65lts" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-65lts,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-65lts,UID:b54fa787-0431-11e9-b07b-0242ac120004,ResourceVersion:963862,Generation:0,CreationTimestamp:2018-12-20 08:31:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc002383727 0xc002383728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-33,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023837a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023837c0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC }],Message:,Reason:,HostIP:10.6.155.33,PodIP:172.28.240.84,StartTime:2018-12-20 08:31:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2018-12-20 08:32:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:2abeba7cab34eb197ff7363486a2aa590027388eafd8e740efae7aae1bed28b6 docker://b1fdd1dbcb55832a22196b83bd1f95154a07218cbc12c33be1348b5f733b896e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.572: INFO: Pod "nginx-deployment-555b55d965-6kn84" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-6kn84,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-6kn84,UID:bdc3341e-0431-11e9-b07b-0242ac120004,ResourceVersion:964016,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc002383870 0xc002383871}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023838e0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002383900}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.572: INFO: Pod "nginx-deployment-555b55d965-6z9f2" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-6z9f2,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-6z9f2,UID:bdc0ea16-0431-11e9-b07b-0242ac120004,ResourceVersion:964015,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc002383960 0xc002383961}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023839d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023839f0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.572: INFO: Pod "nginx-deployment-555b55d965-7txbb" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-7txbb,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-7txbb,UID:bdbf6b53-0431-11e9-b07b-0242ac120004,ResourceVersion:963999,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc002383a50 0xc002383a51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002383ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002383ae0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.573: INFO: Pod "nginx-deployment-555b55d965-9gf9c" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-9gf9c,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-9gf9c,UID:b54e9733-0431-11e9-b07b-0242ac120004,ResourceVersion:963867,Generation:0,CreationTimestamp:2018-12-20 08:31:55 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc002383b40 0xc002383b41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-33,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002383bc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002383be0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 
+0000 UTC }],Message:,Reason:,HostIP:10.6.155.33,PodIP:172.28.240.83,StartTime:2018-12-20 08:31:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2018-12-20 08:32:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:2abeba7cab34eb197ff7363486a2aa590027388eafd8e740efae7aae1bed28b6 docker://a955b13470e04d5db1e79f1a511c1b7d1b248295ae8703412646259d857e1c67}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.573: INFO: Pod "nginx-deployment-555b55d965-h4sgg" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-h4sgg,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-h4sgg,UID:bdc33b9d-0431-11e9-b07b-0242ac120004,ResourceVersion:964006,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc002383c90 0xc002383c91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002383cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002383d10}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.573: INFO: Pod "nginx-deployment-555b55d965-hkk9q" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-hkk9q,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-hkk9q,UID:bdc31fba-0431-11e9-b07b-0242ac120004,ResourceVersion:964018,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc002383d60 0xc002383d61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002383dd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002383df0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.573: INFO: Pod "nginx-deployment-555b55d965-hljbp" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-hljbp,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-hljbp,UID:bdbd8bc6-0431-11e9-b07b-0242ac120004,ResourceVersion:963985,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc002383e50 0xc002383e51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002383ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002383ee0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.573: INFO: Pod "nginx-deployment-555b55d965-jg7fw" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-jg7fw,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-jg7fw,UID:bdbf7d86-0431-11e9-b07b-0242ac120004,ResourceVersion:964003,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc002383f40 0xc002383f41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002383fb0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002383fd0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.573: INFO: Pod "nginx-deployment-555b55d965-llwmt" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-llwmt,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-llwmt,UID:b54e89a5-0431-11e9-b07b-0242ac120004,ResourceVersion:963885,Generation:0,CreationTimestamp:2018-12-20 08:31:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc0019500f0 0xc0019500f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019501d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019501f0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC }],Message:,Reason:,HostIP:10.6.155.34,PodIP:172.28.20.105,StartTime:2018-12-20 08:31:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2018-12-20 08:32:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:2abeba7cab34eb197ff7363486a2aa590027388eafd8e740efae7aae1bed28b6 docker://49afba03d10b1a5ff8f686f67c6aaa6dc01737b87f1acc6f998f096ce0130e36}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.574: INFO: Pod "nginx-deployment-555b55d965-lvn97" is available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-lvn97,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-lvn97,UID:b54f98c6-0431-11e9-b07b-0242ac120004,ResourceVersion:963859,Generation:0,CreationTimestamp:2018-12-20 08:31:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc0019502a0 0xc0019502a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-33,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019503a0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001950420}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC }],Message:,Reason:,HostIP:10.6.155.33,PodIP:172.28.240.68,StartTime:2018-12-20 08:31:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2018-12-20 08:32:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:2abeba7cab34eb197ff7363486a2aa590027388eafd8e740efae7aae1bed28b6 docker://c61d5a8a6c3c085d0d7dc217d42690327c3156b9dc9809288af566905613b85e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.574: INFO: Pod "nginx-deployment-555b55d965-mvpm4" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-mvpm4,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-mvpm4,UID:bdbdc833-0431-11e9-b07b-0242ac120004,ResourceVersion:963986,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc001950500 0xc001950501}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-33,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001950570} {node.kubernetes.io/unreachable Exists NoExecute 0xc001950590}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.574: INFO: Pod "nginx-deployment-555b55d965-p2r66" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-p2r66,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-p2r66,UID:bdbccb02-0431-11e9-b07b-0242ac120004,ResourceVersion:963981,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc001950660 0xc001950661}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019506d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019506f0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.575: INFO: Pod "nginx-deployment-555b55d965-pd25m" is available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-pd25m,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-pd25m,UID:b550bcdd-0431-11e9-b07b-0242ac120004,ResourceVersion:963870,Generation:0,CreationTimestamp:2018-12-20 08:31:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc001950750 0xc001950751}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-33,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019507c0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0019507e0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC }],Message:,Reason:,HostIP:10.6.155.33,PodIP:172.28.240.80,StartTime:2018-12-20 08:31:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2018-12-20 08:32:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:2abeba7cab34eb197ff7363486a2aa590027388eafd8e740efae7aae1bed28b6 docker://dca447ae40ac8629ed9b05f6dde1e27ec380a036c32a0c4da21bbdeb707cd879}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.575: INFO: Pod "nginx-deployment-555b55d965-qbjwf" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-qbjwf,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-qbjwf,UID:bdc32f95-0431-11e9-b07b-0242ac120004,ResourceVersion:964017,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc001950890 0xc001950891}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-33,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001950900} {node.kubernetes.io/unreachable Exists NoExecute 0xc001950930}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.575: INFO: Pod "nginx-deployment-555b55d965-s7zrq" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-s7zrq,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-s7zrq,UID:bdbfe6b3-0431-11e9-b07b-0242ac120004,ResourceVersion:964000,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc0019509a0 0xc0019509a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-33,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001950b10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001950bb0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.575: INFO: Pod "nginx-deployment-555b55d965-sq8c5" is available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-sq8c5,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-sq8c5,UID:b550c76f-0431-11e9-b07b-0242ac120004,ResourceVersion:963909,Generation:0,CreationTimestamp:2018-12-20 08:31:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc001950c20 0xc001950c21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001950da0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001950dc0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC }],Message:,Reason:,HostIP:10.6.155.34,PodIP:172.28.20.100,StartTime:2018-12-20 08:31:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2018-12-20 08:32:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:2abeba7cab34eb197ff7363486a2aa590027388eafd8e740efae7aae1bed28b6 docker://2abb9b8e8c0f9f0bdc55d77faff3bb1f054462056e7fd6254564cf3288f32412}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.576: INFO: Pod "nginx-deployment-555b55d965-v2hrn" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-v2hrn,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-v2hrn,UID:bdbf4f44-0431-11e9-b07b-0242ac120004,ResourceVersion:963998,Generation:0,CreationTimestamp:2018-12-20 08:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc001950e70 0xc001950e71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001950ef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001950f90}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.576: INFO: Pod "nginx-deployment-555b55d965-xzjjq" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-xzjjq,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-xzjjq,UID:b550b78c-0431-11e9-b07b-0242ac120004,ResourceVersion:963903,Generation:0,CreationTimestamp:2018-12-20 08:31:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc001951050 0xc001951051}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019510c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019510e0}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC }],Message:,Reason:,HostIP:10.6.155.34,PodIP:172.28.20.80,StartTime:2018-12-20 08:31:55 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2018-12-20 08:32:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:2abeba7cab34eb197ff7363486a2aa590027388eafd8e740efae7aae1bed28b6 docker://7262147bdfa10413806fb354e5a9bf735b9f1df646da801dfa26e9dc43ff1762}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Dec 20 08:32:09.576: INFO: Pod "nginx-deployment-555b55d965-z977b" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-z977b,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-fbktq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fbktq/pods/nginx-deployment-555b55d965-z977b,UID:b54dec90-0431-11e9-b07b-0242ac120004,ResourceVersion:963900,Generation:0,CreationTimestamp:2018-12-20 08:31:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 b54c7258-0431-11e9-b07b-0242ac120004 0xc0019511d0 0xc0019511d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6c9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6c9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6c9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10-6-155-34,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001951240} {node.kubernetes.io/unreachable Exists NoExecute 0xc001951260}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:32:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:31:55 +0000 UTC }],Message:,Reason:,HostIP:10.6.155.34,PodIP:172.28.20.75,StartTime:2018-12-20 08:31:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2018-12-20 08:32:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:2abeba7cab34eb197ff7363486a2aa590027388eafd8e740efae7aae1bed28b6 docker://c7832262b40a44068ce4724d8247c721e4e3e1d85a0f760c0ec624f65ecf6408}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} + + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:32:09.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-deployment-fbktq" for this suite. 
+Dec 20 08:32:17.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:32:18.107: INFO: namespace: e2e-tests-deployment-fbktq, resource: bindings, ignored listing per whitelist +Dec 20 08:32:18.284: INFO: namespace e2e-tests-deployment-fbktq deletion completed in 8.591805436s + +• [SLOW TEST:23.124 seconds] +[sig-apps] Deployment +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + deployment should support proportional scaling [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:32:18.285: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test substitution in container's args +Dec 20 08:32:18.757: INFO: Waiting up to 5m0s for pod "var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-var-expansion-btgwh" to be 
"success or failure" +Dec 20 08:32:18.763: INFO: Pod "var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.821418ms +Dec 20 08:32:20.788: INFO: Pod "var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031277876s +Dec 20 08:32:22.839: INFO: Pod "var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082450512s +Dec 20 08:32:24.847: INFO: Pod "var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090457379s +Dec 20 08:32:26.900: INFO: Pod "var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143160729s +Dec 20 08:32:28.908: INFO: Pod "var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15160378s +Dec 20 08:32:30.933: INFO: Pod "var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 12.176132032s +Dec 20 08:32:32.945: INFO: Pod "var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 14.18809503s +Dec 20 08:32:34.964: INFO: Pod "var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.20703455s +STEP: Saw pod success +Dec 20 08:32:34.964: INFO: Pod "var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:32:34.979: INFO: Trying to get logs from node 10-6-155-34 pod var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472 container dapi-container: +STEP: delete the pod +Dec 20 08:32:35.020: INFO: Waiting for pod var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:32:35.040: INFO: Pod var-expansion-c3424c00-0431-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:32:35.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-var-expansion-btgwh" for this suite. +Dec 20 08:32:41.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:32:41.201: INFO: namespace: e2e-tests-var-expansion-btgwh, resource: bindings, ignored listing per whitelist +Dec 20 08:32:41.417: INFO: namespace e2e-tests-var-expansion-btgwh deletion completed in 6.336692771s + +• [SLOW TEST:23.132 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] 
[sig-storage] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:32:41.417: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating secret with name s-test-opt-del-d0e4ff83-0431-11e9-b141-0a58ac1c1472 +STEP: Creating secret with name s-test-opt-upd-d0e50002-0431-11e9-b141-0a58ac1c1472 +STEP: Creating the pod +STEP: Deleting secret s-test-opt-del-d0e4ff83-0431-11e9-b141-0a58ac1c1472 +STEP: Updating secret s-test-opt-upd-d0e50002-0431-11e9-b141-0a58ac1c1472 +STEP: Creating secret with name s-test-opt-create-d0e5003d-0431-11e9-b141-0a58ac1c1472 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:32:51.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-secrets-dt9zb" for this suite. 
+Dec 20 08:33:13.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:33:13.963: INFO: namespace: e2e-tests-secrets-dt9zb, resource: bindings, ignored listing per whitelist +Dec 20 08:33:14.006: INFO: namespace e2e-tests-secrets-dt9zb deletion completed in 22.16330131s + +• [SLOW TEST:32.589 seconds] +[sig-storage] Secrets +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:33:14.006: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 
08:33:14.183: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. +Dec 20 08:33:14.219: INFO: Number of nodes with available pods: 0 +Dec 20 08:33:14.219: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:33:15.240: INFO: Number of nodes with available pods: 0 +Dec 20 08:33:15.240: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:33:16.232: INFO: Number of nodes with available pods: 0 +Dec 20 08:33:16.232: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:33:17.233: INFO: Number of nodes with available pods: 0 +Dec 20 08:33:17.233: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:33:18.229: INFO: Number of nodes with available pods: 2 +Dec 20 08:33:18.229: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Dec 20 08:33:18.280: INFO: Wrong image for pod: daemon-set-q2246. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1. +Dec 20 08:33:18.281: INFO: Wrong image for pod: daemon-set-r7jn6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1. +Dec 20 08:33:19.295: INFO: Wrong image for pod: daemon-set-q2246. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1. +Dec 20 08:33:19.295: INFO: Wrong image for pod: daemon-set-r7jn6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1. +Dec 20 08:33:20.293: INFO: Wrong image for pod: daemon-set-q2246. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1. +Dec 20 08:33:20.293: INFO: Wrong image for pod: daemon-set-r7jn6. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1. +Dec 20 08:33:21.298: INFO: Wrong image for pod: daemon-set-q2246. Expected: gcr.io/ +Dec 20 08:34:39.292: INFO: Pod daemon-set-q2246 is not available +Dec 20 08:34:40.294: INFO: Pod daemon-set-ws7nl is not available +STEP: Check that daemon pods are still running on every node of the cluster. +Dec 20 08:34:40.312: INFO: Number of nodes with available pods: 1 +Dec 20 08:34:40.312: INFO: Node 10-6-155-34 is running more than one daemon pod +Dec 20 08:34:41.322: INFO: Number of nodes with available pods: 1 +Dec 20 08:34:41.322: INFO: Node 10-6-155-34 is running more than one daemon pod +Dec 20 08:34:42.328: INFO: Number of nodes with available pods: 1 +Dec 20 08:34:42.328: INFO: Node 10-6-155-34 is running more than one daemon pod +Dec 20 08:34:43.327: INFO: Number of nodes with available pods: 1 +Dec 20 08:34:43.327: INFO: Node 10-6-155-34 is running more than one daemon pod +Dec 20 08:34:44.323: INFO: Number of nodes with available pods: 2 +Dec 20 08:34:44.324: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vkc5g, will wait for the garbage collector to delete the pods +Dec 20 08:34:44.411: INFO: Deleting DaemonSet.extensions daemon-set took: 13.804666ms +Dec 20 08:34:44.511: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.225666ms +Dec 20 08:34:53.217: INFO: Number of nodes with available pods: 0 +Dec 20 08:34:53.217: INFO: Number of running nodes: 0, number of available pods: 0 +Dec 20 08:34:53.223: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vkc5g/daemonsets","resourceVersion":"964733"},"items":null} + +Dec 20 08:34:53.226: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vkc5g/pods","resourceVersion":"964733"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:34:53.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-daemonsets-vkc5g" for this suite. +Dec 20 08:34:59.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:34:59.356: INFO: namespace: e2e-tests-daemonsets-vkc5g, resource: bindings, ignored listing per whitelist +Dec 20 08:34:59.456: INFO: namespace e2e-tests-daemonsets-vkc5g deletion completed in 6.208968482s + +• [SLOW TEST:105.450 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:34:59.457: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir volume type on tmpfs +Dec 20 08:34:59.593: INFO: Waiting up to 5m0s for pod "pod-23228327-0432-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-jjbdd" to be "success or failure" +Dec 20 08:34:59.598: INFO: Pod "pod-23228327-0432-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.580674ms +Dec 20 08:35:01.604: INFO: Pod "pod-23228327-0432-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010896841s +Dec 20 08:35:03.614: INFO: Pod "pod-23228327-0432-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020684932s +STEP: Saw pod success +Dec 20 08:35:03.614: INFO: Pod "pod-23228327-0432-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:35:03.618: INFO: Trying to get logs from node 10-6-155-34 pod pod-23228327-0432-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 08:35:03.654: INFO: Waiting for pod pod-23228327-0432-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:35:03.658: INFO: Pod pod-23228327-0432-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:35:03.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-jjbdd" for this suite. +Dec 20 08:35:09.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:35:09.749: INFO: namespace: e2e-tests-emptydir-jjbdd, resource: bindings, ignored listing per whitelist +Dec 20 08:35:09.843: INFO: namespace e2e-tests-emptydir-jjbdd deletion completed in 6.163626663s + +• [SLOW TEST:10.387 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + volume on tmpfs should have the correct mode [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl version + should check is all data is printed [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client 
+ /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:35:09.844: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[It] should check is all data is printed [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 08:35:09.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 version' +Dec 20 08:35:10.186: INFO: stderr: "" +Dec 20 08:35:10.186: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.0\", GitCommit:\"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d\", GitTreeState:\"clean\", BuildDate:\"2018-12-03T21:04:45Z\", GoVersion:\"go1.11.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.1\", GitCommit:\"eec55b9ba98609a46fee712359c7b5b365bdd920\", GitTreeState:\"clean\", BuildDate:\"2018-12-13T10:31:33Z\", GoVersion:\"go1.11.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:35:10.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-nn85k" for this suite. 
+Dec 20 08:35:16.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:35:16.348: INFO: namespace: e2e-tests-kubectl-nn85k, resource: bindings, ignored listing per whitelist +Dec 20 08:35:16.370: INFO: namespace e2e-tests-kubectl-nn85k deletion completed in 6.17381159s + +• [SLOW TEST:6.526 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl version + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should check is all data is printed [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:35:16.370: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0666 on node default 
medium +Dec 20 08:35:16.554: INFO: Waiting up to 5m0s for pod "pod-2d3dceff-0432-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-dzh7w" to be "success or failure" +Dec 20 08:35:16.557: INFO: Pod "pod-2d3dceff-0432-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.487765ms +Dec 20 08:35:18.562: INFO: Pod "pod-2d3dceff-0432-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008555908s +Dec 20 08:35:20.567: INFO: Pod "pod-2d3dceff-0432-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.013063465s +Dec 20 08:35:22.576: INFO: Pod "pod-2d3dceff-0432-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022238919s +STEP: Saw pod success +Dec 20 08:35:22.576: INFO: Pod "pod-2d3dceff-0432-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:35:22.581: INFO: Trying to get logs from node 10-6-155-34 pod pod-2d3dceff-0432-11e9-b141-0a58ac1c1472 container test-container: +STEP: delete the pod +Dec 20 08:35:22.610: INFO: Waiting for pod pod-2d3dceff-0432-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:35:22.617: INFO: Pod pod-2d3dceff-0432-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:35:22.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-dzh7w" for this suite. 
+Dec 20 08:35:28.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:35:28.722: INFO: namespace: e2e-tests-emptydir-dzh7w, resource: bindings, ignored listing per whitelist +Dec 20 08:35:28.839: INFO: namespace e2e-tests-emptydir-dzh7w deletion completed in 6.210232498s + +• [SLOW TEST:12.469 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (root,0666,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-cli] Kubectl client [k8s.io] Guestbook application + should create and stop a working application [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:35:28.839: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[It] should create and stop a working application [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating all guestbook components +Dec 20 08:35:28.954: INFO: apiVersion: v1 +kind: Service 
+metadata: + name: redis-slave + labels: + app: redis + role: slave + tier: backend +spec: + ports: + - port: 6379 + selector: + app: redis + role: slave + tier: backend + +Dec 20 08:35:28.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:35:29.525: INFO: stderr: "" +Dec 20 08:35:29.525: INFO: stdout: "service/redis-slave created\n" +Dec 20 08:35:29.526: INFO: apiVersion: v1 +kind: Service +metadata: + name: redis-master + labels: + app: redis + role: master + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: redis + role: master + tier: backend + +Dec 20 08:35:29.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:35:29.862: INFO: stderr: "" +Dec 20 08:35:29.862: INFO: stdout: "service/redis-master created\n" +Dec 20 08:35:29.862: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Dec 20 08:35:29.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:35:30.175: INFO: stderr: "" +Dec 20 08:35:30.175: INFO: stdout: "service/frontend created\n" +Dec 20 08:35:30.177: INFO: apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: php-redis + image: gcr.io/google-samples/gb-frontend:v6 + resources: + requests: + cpu: 100m + memory: 100Mi + env: + - name: GET_HOSTS_FROM + value: dns + # If your cluster config does not include a dns service, then to + # instead access environment variables to find service host + # info, comment out the 'value: dns' line above, and uncomment the + # line below: + # value: env + ports: + - containerPort: 80 + +Dec 20 08:35:30.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:35:30.481: INFO: stderr: "" +Dec 20 08:35:30.481: INFO: stdout: "deployment.extensions/frontend created\n" +Dec 20 08:35:30.481: INFO: apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + name: redis-master +spec: + replicas: 1 + template: + metadata: + labels: + app: redis + role: master + tier: backend + spec: + containers: + - name: master + image: gcr.io/kubernetes-e2e-test-images/redis:1.0 + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Dec 20 08:35:30.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:35:30.781: INFO: stderr: "" +Dec 20 08:35:30.781: INFO: stdout: "deployment.extensions/redis-master created\n" +Dec 20 08:35:30.781: INFO: apiVersion: extensions/v1beta1 +kind: Deployment +metadata: 
+ name: redis-slave +spec: + replicas: 2 + template: + metadata: + labels: + app: redis + role: slave + tier: backend + spec: + containers: + - name: slave + image: gcr.io/google-samples/gb-redisslave:v3 + resources: + requests: + cpu: 100m + memory: 100Mi + env: + - name: GET_HOSTS_FROM + value: dns + # If your cluster config does not include a dns service, then to + # instead access an environment variable to find the master + # service's host, comment out the 'value: dns' line above, and + # uncomment the line below: + # value: env + ports: + - containerPort: 6379 + +Dec 20 08:35:30.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:35:31.078: INFO: stderr: "" +Dec 20 08:35:31.082: INFO: stdout: "deployment.extensions/redis-slave created\n" +STEP: validating guestbook app +Dec 20 08:35:31.082: INFO: Waiting for all frontend pods to be Running. +Dec 20 08:35:41.138: INFO: Waiting for frontend to serve content. +Dec 20 08:35:41.149: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: +Dec 20 08:35:46.484: INFO: Trying to add a new entry to the guestbook. +Dec 20 08:35:47.207: INFO: Verifying that added entry can be retrieved. +Dec 20 08:35:47.259: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Dec 20 08:35:52.296: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Dec 20 08:35:57.322: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Dec 20 08:36:02.352: INFO: Failed to get response from guestbook. 
err: , response: {"data": ""} +STEP: using delete to clean up resources +Dec 20 08:36:07.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:36:07.548: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Dec 20 08:36:07.548: INFO: stdout: "service \"redis-slave\" force deleted\n" +STEP: using delete to clean up resources +Dec 20 08:36:07.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:36:07.785: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Dec 20 08:36:07.785: INFO: stdout: "service \"redis-master\" force deleted\n" +STEP: using delete to clean up resources +Dec 20 08:36:07.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:36:08.004: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Dec 20 08:36:08.004: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Dec 20 08:36:08.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:36:08.257: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Dec 20 08:36:08.257: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Dec 20 08:36:08.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:36:08.490: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Dec 20 08:36:08.490: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" +STEP: using delete to clean up resources +Dec 20 08:36:08.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-klvkd' +Dec 20 08:36:08.641: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Dec 20 08:36:08.641: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:36:08.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-klvkd" for this suite. 
+Dec 20 08:36:46.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:36:46.722: INFO: namespace: e2e-tests-kubectl-klvkd, resource: bindings, ignored listing per whitelist +Dec 20 08:36:46.799: INFO: namespace e2e-tests-kubectl-klvkd deletion completed in 38.148511257s + +• [SLOW TEST:77.960 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Guestbook application + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should create and stop a working application [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:36:46.799: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic 
StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace e2e-tests-statefulset-zz6zj +[It] Burst scaling should run to completion even with unhealthy pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating stateful set ss in namespace e2e-tests-statefulset-zz6zj +STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-zz6zj +Dec 20 08:36:46.920: INFO: Found 0 stateful pods, waiting for 1 +Dec 20 08:36:56.930: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Dec 20 08:36:56.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Dec 20 08:36:57.525: INFO: stderr: "" +Dec 20 08:36:57.525: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Dec 20 08:36:57.525: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Dec 20 08:36:57.547: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Dec 20 08:37:07.554: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Dec 20 08:37:07.554: INFO: Waiting for statefulset status.replicas updated to 0 +Dec 20 08:37:07.590: INFO: POD NODE PHASE GRACE CONDITIONS +Dec 20 08:37:07.590: INFO: ss-0 10-6-155-34 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:36:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2018-12-20 08:36:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:36:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:36:46 +0000 UTC }] +Dec 20 08:37:07.590: INFO: ss-1 Pending [] +Dec 20 08:37:07.590: INFO: +Dec 20 08:37:07.590: INFO: StatefulSet ss has not reached scale 3, at 2 +Dec 20 08:37:08.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993049523s +Dec 20 08:37:09.603: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986023913s +Dec 20 08:37:10.610: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.979774164s +Dec 20 08:37:11.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.973195096s +Dec 20 08:37:12.623: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.966742643s +Dec 20 08:37:13.631: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.960280501s +Dec 20 08:37:14.638: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.952331843s +Dec 20 08:37:15.647: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.944932588s +Dec 20 08:37:16.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 936.189232ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-zz6zj +Dec 20 08:37:17.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:37:17.961: INFO: stderr: "" +Dec 20 08:37:17.961: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Dec 20 08:37:17.961: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + 
+Dec 20 08:37:17.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:37:18.417: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n" +Dec 20 08:37:18.417: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Dec 20 08:37:18.417: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Dec 20 08:37:18.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:37:18.905: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n" +Dec 20 08:37:18.905: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Dec 20 08:37:18.905: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Dec 20 08:37:18.910: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:37:18.910: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:37:18.910: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Dec 20 08:37:18.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Dec 20 08:37:19.253: INFO: stderr: "" +Dec 20 08:37:19.253: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Dec 20 08:37:19.253: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: 
'/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Dec 20 08:37:19.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Dec 20 08:37:19.561: INFO: stderr: "" +Dec 20 08:37:19.561: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Dec 20 08:37:19.561: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Dec 20 08:37:19.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Dec 20 08:37:19.912: INFO: stderr: "" +Dec 20 08:37:19.912: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Dec 20 08:37:19.912: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Dec 20 08:37:19.912: INFO: Waiting for statefulset status.replicas updated to 0 +Dec 20 08:37:19.920: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Dec 20 08:37:29.929: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Dec 20 08:37:29.929: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Dec 20 08:37:29.930: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Dec 20 08:37:29.946: INFO: POD NODE PHASE GRACE CONDITIONS +Dec 20 08:37:29.946: INFO: ss-0 10-6-155-34 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:36:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:19 
+0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:36:46 +0000 UTC }] +Dec 20 08:37:29.946: INFO: ss-1 10-6-155-33 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:07 +0000 UTC }] +Dec 20 08:37:29.946: INFO: ss-2 10-6-155-34 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:07 +0000 UTC }] +Dec 20 08:37:29.946: INFO: +Dec 20 08:37:29.946: INFO: StatefulSet ss has not reached scale 0, at 3 +Dec 20 08:37:30.980: INFO: POD NODE PHASE GRACE CONDITIONS +Dec 20 08:37:30.980: INFO: ss-0 10-6-155-34 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:36:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:36:46 +0000 UTC }] +Dec 20 08:37:30.980: INFO: ss-1 10-6-155-33 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:20 +0000 
UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:07 +0000 UTC }] +Dec 20 08:37:30.980: INFO: ss-2 10-6-155-34 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-12-20 08:37:07 +0000 UTC }] +Dec 20 08:37:40.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:37:40.304: INFO: rc: 1 +Dec 20 08:37:40.304: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") + [] 0xc00123f4d0 exit status 1 true [0xc00031ce08 0xc00031ce88 0xc00031cec0] [0xc00031ce08 0xc00031ce88 0xc00031cec0] [0xc00031ce50 0xc00031ceb8] [0x92f8e0 0x92f8e0] 0xc0020446c0 }: +Command stdout: + +stderr: +error: unable to upgrade connection: container not found ("nginx") + +error: +exit status 1 + +Dec 20 08:37:50.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:37:50.451: INFO: rc: 1 +Dec 20 08:37:50.451: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00162e540 exit status 1 true [0xc000db8100 0xc000db8118 0xc000db8130] [0xc000db8100 0xc000db8118 0xc000db8130] [0xc000db8110 0xc000db8128] [0x92f8e0 0x92f8e0] 0xc0019094a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:38:00.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:38:00.588: INFO: rc: 1 +Dec 20 08:38:00.589: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00143bcb0 exit status 1 true [0xc000fe82a0 0xc000fe82b8 0xc000fe82d0] [0xc000fe82a0 0xc000fe82b8 0xc000fe82d0] [0xc000fe82b0 0xc000fe82c8] [0x92f8e0 0x92f8e0] 0xc002177200 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:38:10.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:38:10.707: INFO: rc: 1 +Dec 20 08:38:10.707: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods 
"ss-1" not found + [] 0xc00123f9b0 exit status 1 true [0xc00031ced0 0xc00031cf08 0xc00031cf50] [0xc00031ced0 0xc00031cf08 0xc00031cf50] [0xc00031cef0 0xc00031cf28] [0x92f8e0 0x92f8e0] 0xc002044d20 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:38:20.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:38:20.850: INFO: rc: 1 +Dec 20 08:38:20.851: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00123fd70 exit status 1 true [0xc00031cf60 0xc00031cfe0 0xc00031d030] [0xc00031cf60 0xc00031cfe0 0xc00031d030] [0xc00031cfb0 0xc00031d010] [0x92f8e0 0x92f8e0] 0xc002045680 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:38:30.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:38:30.981: INFO: rc: 1 +Dec 20 08:38:30.981: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc001814120 exit status 1 true [0xc00031d040 0xc00031d0b0 0xc00031d108] [0xc00031d040 0xc00031d0b0 0xc00031d108] [0xc00031d080 0xc00031d0e0] [0x92f8e0 0x92f8e0] 0xc002045e60 }: +Command stdout: + +stderr: +Error from server (NotFound): 
pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:38:40.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:38:41.117: INFO: rc: 1 +Dec 20 08:38:41.117: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00162ef60 exit status 1 true [0xc000db8138 0xc000db8150 0xc000db8168] [0xc000db8138 0xc000db8150 0xc000db8168] [0xc000db8148 0xc000db8160] [0x92f8e0 0x92f8e0] 0xc0017c2780 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:38:51.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:38:51.247: INFO: rc: 1 +Dec 20 08:38:51.247: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc001c06090 exit status 1 true [0xc000fe82d8 0xc000fe82f0 0xc000fe8308] [0xc000fe82d8 0xc000fe82f0 0xc000fe8308] [0xc000fe82e8 0xc000fe8300] [0x92f8e0 0x92f8e0] 0xc0021775c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:39:01.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' +Dec 20 08:39:01.389: INFO: rc: 1 +Dec 20 08:39:01.389: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00162f350 exit status 1 true [0xc000db8170 0xc000db8188 0xc000db81a0] [0xc000db8170 0xc000db8188 0xc000db81a0] [0xc000db8180 0xc000db8198] [0x92f8e0 0x92f8e0] 0xc0017c3da0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:39:11.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:39:11.530: INFO: rc: 1 +Dec 20 08:39:11.530: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0018144b0 exit status 1 true [0xc00031d120 0xc00031d1a0 0xc00031d208] [0xc00031d120 0xc00031d1a0 0xc00031d208] [0xc00031d168 0xc00031d1d8] [0x92f8e0 0x92f8e0] 0xc000c90a20 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:39:21.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:39:21.670: INFO: rc: 1 +Dec 20 08:39:21.670: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c 
mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00123e420 exit status 1 true [0xc0004cb1d0 0xc0004cbc60 0xc0004cbdd8] [0xc0004cb1d0 0xc0004cbc60 0xc0004cbdd8] [0xc0004cb2e0 0xc0004cbd68] [0x92f8e0 0x92f8e0] 0xc0017c33e0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:39:31.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:39:31.792: INFO: rc: 1 +Dec 20 08:39:31.792: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00111a4e0 exit status 1 true [0xc00031c018 0xc00031c220 0xc00031c3b0] [0xc00031c018 0xc00031c220 0xc00031c3b0] [0xc00031c218 0xc00031c348] [0x92f8e0 0x92f8e0] 0xc002044600 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:39:41.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:39:41.919: INFO: rc: 1 +Dec 20 08:39:41.919: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00143a3c0 exit status 1 true [0xc000db8000 0xc000db8018 0xc000db8030] [0xc000db8000 0xc000db8018 0xc000db8030] [0xc000db8010 0xc000db8028] 
[0x92f8e0 0x92f8e0] 0xc0019083c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:39:51.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:39:52.045: INFO: rc: 1 +Dec 20 08:39:52.045: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00111aa80 exit status 1 true [0xc00031c458 0xc00031c828 0xc00031c928] [0xc00031c458 0xc00031c828 0xc00031c928] [0xc00031c560 0xc00031c8d0] [0x92f8e0 0x92f8e0] 0xc002044c60 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:40:02.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:40:02.204: INFO: rc: 1 +Dec 20 08:40:02.204: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00111af00 exit status 1 true [0xc00031c938 0xc00031c970 0xc00031c9d8] [0xc00031c938 0xc00031c970 0xc00031c9d8] [0xc00031c960 0xc00031c9c8] [0x92f8e0 0x92f8e0] 0xc002045500 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:40:12.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec 
--namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:40:12.340: INFO: rc: 1 +Dec 20 08:40:12.340: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00143a900 exit status 1 true [0xc000db8038 0xc000db8050 0xc000db8078] [0xc000db8038 0xc000db8050 0xc000db8078] [0xc000db8048 0xc000db8070] [0x92f8e0 0x92f8e0] 0xc001908960 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:40:22.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:40:22.478: INFO: rc: 1 +Dec 20 08:40:22.478: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00111b410 exit status 1 true [0xc00031c9f8 0xc00031ca58 0xc00031ca78] [0xc00031c9f8 0xc00031ca58 0xc00031ca78] [0xc00031ca30 0xc00031ca70] [0x92f8e0 0x92f8e0] 0xc002045da0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:40:32.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:40:32.604: INFO: rc: 1 +Dec 20 08:40:32.604: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl 
[kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00111b860 exit status 1 true [0xc00031ca88 0xc00031cad8 0xc00031cba8] [0xc00031ca88 0xc00031cad8 0xc00031cba8] [0xc00031cac8 0xc00031cb70] [0x92f8e0 0x92f8e0] 0xc0020fa2a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:40:42.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:40:42.753: INFO: rc: 1 +Dec 20 08:40:42.754: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00143b380 exit status 1 true [0xc000db8080 0xc000db8098 0xc000db80b0] [0xc000db8080 0xc000db8098 0xc000db80b0] [0xc000db8090 0xc000db80a8] [0x92f8e0 0x92f8e0] 0xc001908d80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:40:52.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:40:52.871: INFO: rc: 1 +Dec 20 08:40:52.871: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00111bbf0 exit status 1 true 
[0xc00031cbb0 0xc00031cbe8 0xc00031ccb8] [0xc00031cbb0 0xc00031cbe8 0xc00031ccb8] [0xc00031cbd0 0xc00031cca0] [0x92f8e0 0x92f8e0] 0xc0020fa7e0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:41:02.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:41:02.992: INFO: rc: 1 +Dec 20 08:41:02.992: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00111bfb0 exit status 1 true [0xc00031cd00 0xc00031cd70 0xc00031cd98] [0xc00031cd00 0xc00031cd70 0xc00031cd98] [0xc00031cd60 0xc00031cd90] [0x92f8e0 0x92f8e0] 0xc0020fad80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:41:12.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:41:13.142: INFO: rc: 1 +Dec 20 08:41:13.142: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc000e52540 exit status 1 true [0xc000be4000 0xc000be4018 0xc000be4030] [0xc000be4000 0xc000be4018 0xc000be4030] [0xc000be4010 0xc000be4028] [0x92f8e0 0x92f8e0] 0xc000c908a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 
20 08:41:23.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:41:23.265: INFO: rc: 1 +Dec 20 08:41:23.265: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00111a510 exit status 1 true [0xc000be4000 0xc000be4018 0xc000be4030] [0xc000be4000 0xc000be4018 0xc000be4030] [0xc000be4010 0xc000be4028] [0x92f8e0 0x92f8e0] 0xc002044600 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:41:33.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:41:33.394: INFO: rc: 1 +Dec 20 08:41:33.394: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00111aab0 exit status 1 true [0xc000be4038 0xc000be4050 0xc000be4068] [0xc000be4038 0xc000be4050 0xc000be4068] [0xc000be4048 0xc000be4060] [0x92f8e0 0x92f8e0] 0xc002044c60 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:41:43.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:41:43.523: INFO: rc: 1 +Dec 20 
08:41:43.523: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0017b1ef0 exit status 1 true [0xc00031c018 0xc00031c220 0xc00031c3b0] [0xc00031c018 0xc00031c220 0xc00031c3b0] [0xc00031c218 0xc00031c348] [0x92f8e0 0x92f8e0] 0xc000c906c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:41:53.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:41:53.640: INFO: rc: 1 +Dec 20 08:41:53.640: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00123e360 exit status 1 true [0xc00031c458 0xc00031c828 0xc00031c928] [0xc00031c458 0xc00031c828 0xc00031c928] [0xc00031c560 0xc00031c8d0] [0x92f8e0 0x92f8e0] 0xc000c91320 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:42:03.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:42:03.753: INFO: rc: 1 +Dec 20 08:42:03.753: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00123e750 exit status 1 true [0xc00031c938 0xc00031c970 0xc00031c9d8] [0xc00031c938 0xc00031c970 0xc00031c9d8] [0xc00031c960 0xc00031c9c8] [0x92f8e0 0x92f8e0] 0xc000c91c80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:42:13.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:42:13.908: INFO: rc: 1 +Dec 20 08:42:13.908: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00123ec00 exit status 1 true [0xc00031c9f8 0xc00031ca58 0xc00031ca78] [0xc00031c9f8 0xc00031ca58 0xc00031ca78] [0xc00031ca30 0xc00031ca70] [0x92f8e0 0x92f8e0] 0xc0020fa180 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:42:23.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:42:24.027: INFO: rc: 1 +Dec 20 08:42:24.027: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc000e52570 exit status 1 true [0xc0004cb1d0 0xc0004cbc60 0xc0004cbdd8] [0xc0004cb1d0 0xc0004cbc60 0xc0004cbdd8] [0xc0004cb2e0 0xc0004cbd68] [0x92f8e0 0x92f8e0] 0xc0017c37a0 }: +Command 
stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:42:34.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:42:34.167: INFO: rc: 1 +Dec 20 08:42:34.167: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00123f200 exit status 1 true [0xc00031ca88 0xc00031cad8 0xc00031cba8] [0xc00031ca88 0xc00031cad8 0xc00031cba8] [0xc00031cac8 0xc00031cb70] [0x92f8e0 0x92f8e0] 0xc0020fa660 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Dec 20 08:42:44.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-zz6zj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:42:44.296: INFO: rc: 1 +Dec 20 08:42:44.296: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: +Dec 20 08:42:44.296: INFO: Scaling statefulset ss to 0 +Dec 20 08:42:44.310: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Dec 20 08:42:44.313: INFO: Deleting all statefulset in ns e2e-tests-statefulset-zz6zj +Dec 20 08:42:44.316: INFO: Scaling statefulset ss to 0 +Dec 20 08:42:44.337: INFO: Waiting for statefulset status.replicas updated to 0 +Dec 20 08:42:44.343: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:42:44.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-statefulset-zz6zj" for this suite. +Dec 20 08:42:50.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:42:50.530: INFO: namespace: e2e-tests-statefulset-zz6zj, resource: bindings, ignored listing per whitelist +Dec 20 08:42:50.575: INFO: namespace e2e-tests-statefulset-zz6zj deletion completed in 6.203857993s + +• [SLOW TEST:363.776 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + Burst scaling should run to completion even with unhealthy pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:42:50.575: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service 
account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should retry creating failed daemon pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Dec 20 08:42:50.773: INFO: Number of nodes with available pods: 0 +Dec 20 08:42:50.773: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:42:51.787: INFO: Number of nodes with available pods: 0 +Dec 20 08:42:51.787: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:42:52.788: INFO: Number of nodes with available pods: 0 +Dec 20 08:42:52.788: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:42:53.792: INFO: Number of nodes with available pods: 0 +Dec 20 08:42:53.792: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:42:54.783: INFO: Number of nodes with available pods: 1 +Dec 20 08:42:54.783: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:42:55.782: INFO: Number of nodes with available pods: 2 +Dec 20 08:42:55.782: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
+Dec 20 08:42:55.807: INFO: Number of nodes with available pods: 1 +Dec 20 08:42:55.807: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:42:56.821: INFO: Number of nodes with available pods: 1 +Dec 20 08:42:56.821: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:42:57.822: INFO: Number of nodes with available pods: 1 +Dec 20 08:42:57.822: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:42:58.827: INFO: Number of nodes with available pods: 1 +Dec 20 08:42:58.827: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:42:59.817: INFO: Number of nodes with available pods: 1 +Dec 20 08:42:59.817: INFO: Node 10-6-155-33 is running more than one daemon pod +Dec 20 08:43:00.819: INFO: Number of nodes with available pods: 2 +Dec 20 08:43:00.819: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Wait for the failed daemon pod to be completely deleted. +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-jsxjj, will wait for the garbage collector to delete the pods +Dec 20 08:43:00.914: INFO: Deleting DaemonSet.extensions daemon-set took: 33.184916ms +Dec 20 08:43:01.014: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.480661ms +Dec 20 08:43:43.220: INFO: Number of nodes with available pods: 0 +Dec 20 08:43:43.220: INFO: Number of running nodes: 0, number of available pods: 0 +Dec 20 08:43:43.226: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jsxjj/daemonsets","resourceVersion":"966037"},"items":null} + +Dec 20 08:43:43.231: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jsxjj/pods","resourceVersion":"966037"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:43:43.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-daemonsets-jsxjj" for this suite. +Dec 20 08:43:49.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:43:49.332: INFO: namespace: e2e-tests-daemonsets-jsxjj, resource: bindings, ignored listing per whitelist +Dec 20 08:43:49.457: INFO: namespace e2e-tests-daemonsets-jsxjj deletion completed in 6.189481087s + +• [SLOW TEST:58.882 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should retry creating failed daemon pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:43:49.457: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] 
[sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should set mode on item file [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Dec 20 08:43:49.637: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f0eab12-0433-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-t7mp4" to be "success or failure" +Dec 20 08:43:49.658: INFO: Pod "downwardapi-volume-5f0eab12-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 20.436869ms +Dec 20 08:43:51.678: INFO: Pod "downwardapi-volume-5f0eab12-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040319184s +Dec 20 08:43:53.682: INFO: Pod "downwardapi-volume-5f0eab12-0433-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.044276984s +Dec 20 08:43:55.688: INFO: Pod "downwardapi-volume-5f0eab12-0433-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.0509706s +STEP: Saw pod success +Dec 20 08:43:55.688: INFO: Pod "downwardapi-volume-5f0eab12-0433-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:43:55.692: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-5f0eab12-0433-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 08:43:55.714: INFO: Waiting for pod downwardapi-volume-5f0eab12-0433-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:43:55.722: INFO: Pod downwardapi-volume-5f0eab12-0433-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:43:55.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-t7mp4" for this suite. +Dec 20 08:44:01.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:44:01.843: INFO: namespace: e2e-tests-downward-api-t7mp4, resource: bindings, ignored listing per whitelist +Dec 20 08:44:01.903: INFO: namespace e2e-tests-downward-api-t7mp4 deletion completed in 6.176548495s + +• [SLOW TEST:12.446 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should set mode on item file [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:44:01.904: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: Gathering metrics +W1220 08:44:08.157555 17 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
+Dec 20 08:44:08.157: INFO: For apiserver_request_count: +For apiserver_request_latencies_summary: +For etcd_helper_cache_entry_count: +For etcd_helper_cache_hit_count: +For etcd_helper_cache_miss_count: +For etcd_request_cache_add_latencies_summary: +For etcd_request_cache_get_latencies_summary: +For etcd_request_latencies_summary: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:44:08.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-gc-mkvxd" for this suite. 
+Dec 20 08:44:14.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:44:14.348: INFO: namespace: e2e-tests-gc-mkvxd, resource: bindings, ignored listing per whitelist +Dec 20 08:44:14.433: INFO: namespace e2e-tests-gc-mkvxd deletion completed in 6.268551416s + +• [SLOW TEST:12.530 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:44:14.434: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward api env vars +Dec 20 08:44:14.632: INFO: Waiting up to 5m0s for pod "downward-api-6df65ea5-0433-11e9-b141-0a58ac1c1472" in namespace 
"e2e-tests-downward-api-4wqhw" to be "success or failure" +Dec 20 08:44:14.638: INFO: Pod "downward-api-6df65ea5-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.246103ms +Dec 20 08:44:16.644: INFO: Pod "downward-api-6df65ea5-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011134621s +Dec 20 08:44:18.670: INFO: Pod "downward-api-6df65ea5-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037657339s +Dec 20 08:44:20.683: INFO: Pod "downward-api-6df65ea5-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050363677s +Dec 20 08:44:22.691: INFO: Pod "downward-api-6df65ea5-0433-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058201497s +STEP: Saw pod success +Dec 20 08:44:22.691: INFO: Pod "downward-api-6df65ea5-0433-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:44:22.695: INFO: Trying to get logs from node 10-6-155-34 pod downward-api-6df65ea5-0433-11e9-b141-0a58ac1c1472 container dapi-container: +STEP: delete the pod +Dec 20 08:44:22.734: INFO: Waiting for pod downward-api-6df65ea5-0433-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:44:22.741: INFO: Pod downward-api-6df65ea5-0433-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:44:22.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-4wqhw" for this suite. 
+Dec 20 08:44:28.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:44:28.925: INFO: namespace: e2e-tests-downward-api-4wqhw, resource: bindings, ignored listing per whitelist +Dec 20 08:44:29.027: INFO: namespace e2e-tests-downward-api-4wqhw deletion completed in 6.279073285s + +• [SLOW TEST:14.594 seconds] +[sig-node] Downward API +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:44:29.028: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's memory request [NodeConformance] [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Dec 20 08:44:29.228: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76aa1149-0433-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-gn5pw" to be "success or failure" +Dec 20 08:44:29.234: INFO: Pod "downwardapi-volume-76aa1149-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.003988ms +Dec 20 08:44:31.263: INFO: Pod "downwardapi-volume-76aa1149-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034139519s +Dec 20 08:44:33.268: INFO: Pod "downwardapi-volume-76aa1149-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038925127s +Dec 20 08:44:35.272: INFO: Pod "downwardapi-volume-76aa1149-0433-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042791394s +STEP: Saw pod success +Dec 20 08:44:35.272: INFO: Pod "downwardapi-volume-76aa1149-0433-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:44:35.277: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-76aa1149-0433-11e9-b141-0a58ac1c1472 container client-container: +STEP: delete the pod +Dec 20 08:44:35.307: INFO: Waiting for pod downwardapi-volume-76aa1149-0433-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:44:35.314: INFO: Pod downwardapi-volume-76aa1149-0433-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:44:35.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-gn5pw" for this suite. 
+Dec 20 08:44:41.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:44:41.454: INFO: namespace: e2e-tests-projected-gn5pw, resource: bindings, ignored listing per whitelist +Dec 20 08:44:41.518: INFO: namespace e2e-tests-projected-gn5pw deletion completed in 6.189904624s + +• [SLOW TEST:12.490 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:44:41.518: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward api env vars +Dec 20 08:44:41.706: INFO: Waiting up to 5m0s for pod "downward-api-7e164378-0433-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-jcb7m" to be "success or failure" +Dec 
20 08:44:41.714: INFO: Pod "downward-api-7e164378-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 7.965185ms +Dec 20 08:44:43.721: INFO: Pod "downward-api-7e164378-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014909529s +Dec 20 08:44:45.728: INFO: Pod "downward-api-7e164378-0433-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02169018s +Dec 20 08:44:47.732: INFO: Pod "downward-api-7e164378-0433-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026244499s +STEP: Saw pod success +Dec 20 08:44:47.732: INFO: Pod "downward-api-7e164378-0433-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:44:47.736: INFO: Trying to get logs from node 10-6-155-34 pod downward-api-7e164378-0433-11e9-b141-0a58ac1c1472 container dapi-container: +STEP: delete the pod +Dec 20 08:44:47.790: INFO: Waiting for pod downward-api-7e164378-0433-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:44:47.801: INFO: Pod downward-api-7e164378-0433-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:44:47.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-jcb7m" for this suite. 
+Dec 20 08:44:53.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:44:53.937: INFO: namespace: e2e-tests-downward-api-jcb7m, resource: bindings, ignored listing per whitelist +Dec 20 08:44:54.067: INFO: namespace e2e-tests-downward-api-jcb7m deletion completed in 6.23621169s + +• [SLOW TEST:12.549 seconds] +[sig-node] Downward API +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:44:54.068: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +STEP: verifying the pod is 
in kubernetes +STEP: verifying pod creation was observed +Dec 20 08:45:00.268: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-85923b25-0433-11e9-b141-0a58ac1c1472", GenerateName:"", Namespace:"e2e-tests-pods-2g5nl", SelfLink:"/api/v1/namespaces/e2e-tests-pods-2g5nl/pods/pod-submit-remove-85923b25-0433-11e9-b141-0a58ac1c1472", UID:"85925ed3-0433-11e9-b07b-0242ac120004", ResourceVersion:"966483", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63680892294, loc:(*time.Location)(0x7b33b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"230717853", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ngcw7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002805e80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ngcw7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f6d668), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"10-6-155-34", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c8f980), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f6d6b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", 
Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f6d6d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f6d6d8)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63680892294, loc:(*time.Location)(0x7b33b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63680892298, loc:(*time.Location)(0x7b33b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63680892298, loc:(*time.Location)(0x7b33b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63680892294, loc:(*time.Location)(0x7b33b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.6.155.34", PodIP:"172.28.20.81", StartTime:(*v1.Time)(0xc001fcf9c0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001fcf9e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:2abeba7cab34eb197ff7363486a2aa590027388eafd8e740efae7aae1bed28b6", ContainerID:"docker://29d9f780183f1033b5d3bfa36934bf4b75930b9fed23a4457a088702add3f352"}}, QOSClass:"BestEffort"}} +STEP: deleting the pod gracefully +STEP: verifying the kubelet observed the termination notice +Dec 20 08:45:05.294: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed +STEP: verifying pod deletion was observed +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:45:05.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-pods-2g5nl" for this suite. +Dec 20 08:45:11.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:45:11.343: INFO: namespace: e2e-tests-pods-2g5nl, resource: bindings, ignored listing per whitelist +Dec 20 08:45:11.533: INFO: namespace e2e-tests-pods-2g5nl deletion completed in 6.222241556s + +• [SLOW TEST:17.465 seconds] +[k8s.io] Pods +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should be submitted and removed [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:45:11.533: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace e2e-tests-statefulset-8lbq5 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a new StatefulSet +Dec 20 08:45:11.727: INFO: Found 0 stateful pods, waiting for 3 +Dec 20 08:45:21.734: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:45:21.735: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:45:21.735: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false +Dec 20 08:45:31.734: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:45:31.734: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:45:31.734: INFO: Waiting for 
pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Dec 20 08:45:31.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-8lbq5 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Dec 20 08:45:32.116: INFO: stderr: "" +Dec 20 08:45:32.116: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Dec 20 08:45:32.116: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine +Dec 20 08:45:42.158: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Dec 20 08:45:52.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-8lbq5 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:45:52.499: INFO: stderr: "" +Dec 20 08:45:52.499: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Dec 20 08:45:52.499: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Dec 20 08:46:22.525: INFO: Waiting for StatefulSet e2e-tests-statefulset-8lbq5/ss2 to complete update +Dec 20 08:46:22.525: INFO: Waiting for Pod e2e-tests-statefulset-8lbq5/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666 +Dec 20 08:46:32.538: INFO: Waiting for StatefulSet e2e-tests-statefulset-8lbq5/ss2 to complete update +STEP: Rolling back to a previous revision +Dec 20 08:46:42.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-8lbq5 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Dec 20 08:46:43.024: INFO: stderr: 
"" +Dec 20 08:46:43.024: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Dec 20 08:46:43.024: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Dec 20 08:46:53.072: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Dec 20 08:47:03.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 exec --namespace=e2e-tests-statefulset-8lbq5 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Dec 20 08:47:03.413: INFO: stderr: "" +Dec 20 08:47:03.413: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Dec 20 08:47:03.413: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Dec 20 08:47:13.462: INFO: Waiting for StatefulSet e2e-tests-statefulset-8lbq5/ss2 to complete update +Dec 20 08:47:13.462: INFO: Waiting for Pod e2e-tests-statefulset-8lbq5/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9 +Dec 20 08:47:13.462: INFO: Waiting for Pod e2e-tests-statefulset-8lbq5/ss2-1 to have revision ss2-787997d666 update revision ss2-c79899b9 +Dec 20 08:47:23.476: INFO: Waiting for StatefulSet e2e-tests-statefulset-8lbq5/ss2 to complete update +Dec 20 08:47:23.476: INFO: Waiting for Pod e2e-tests-statefulset-8lbq5/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9 +Dec 20 08:47:33.473: INFO: Waiting for StatefulSet e2e-tests-statefulset-8lbq5/ss2 to complete update +Dec 20 08:47:33.473: INFO: Waiting for Pod e2e-tests-statefulset-8lbq5/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9 +Dec 20 08:47:43.484: INFO: Waiting for StatefulSet e2e-tests-statefulset-8lbq5/ss2 to complete update +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Dec 20 08:47:53.471: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8lbq5 +Dec 20 08:47:53.477: INFO: Scaling statefulset ss2 to 0 +Dec 20 08:48:23.505: INFO: Waiting for statefulset status.replicas updated to 0 +Dec 20 08:48:23.508: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:48:23.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-statefulset-8lbq5" for this suite. +Dec 20 08:48:29.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:48:29.724: INFO: namespace: e2e-tests-statefulset-8lbq5, resource: bindings, ignored listing per whitelist +Dec 20 08:48:29.752: INFO: namespace e2e-tests-statefulset-8lbq5 deletion completed in 6.218741633s + +• [SLOW TEST:198.218 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:48:29.752: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Dec 20 08:48:29.934: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-pgbll,SelfLink:/api/v1/namespaces/e2e-tests-watch-pgbll/configmaps/e2e-watch-test-resource-version,UID:061e30e7-0434-11e9-b07b-0242ac120004,ResourceVersion:967201,Generation:0,CreationTimestamp:2018-12-20 08:48:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Dec 20 08:48:29.934: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-pgbll,SelfLink:/api/v1/namespaces/e2e-tests-watch-pgbll/configmaps/e2e-watch-test-resource-version,UID:061e30e7-0434-11e9-b07b-0242ac120004,ResourceVersion:967202,Generation:0,CreationTimestamp:2018-12-20 08:48:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:48:29.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-watch-pgbll" for this suite. +Dec 20 08:48:35.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:48:36.165: INFO: namespace: e2e-tests-watch-pgbll, resource: bindings, ignored listing per whitelist +Dec 20 08:48:36.206: INFO: namespace e2e-tests-watch-pgbll deletion completed in 6.263231582s + +• [SLOW TEST:6.454 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:48:36.206: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: getting the auto-created API token +Dec 20 08:48:36.925: INFO: created pod pod-service-account-defaultsa +Dec 20 08:48:36.925: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Dec 20 08:48:36.934: INFO: created pod pod-service-account-mountsa +Dec 20 08:48:36.934: INFO: pod pod-service-account-mountsa service account token volume mount: true +Dec 20 08:48:36.945: INFO: created pod pod-service-account-nomountsa +Dec 20 08:48:36.945: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Dec 20 08:48:36.954: INFO: created pod pod-service-account-defaultsa-mountspec +Dec 20 08:48:36.954: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Dec 20 08:48:36.958: INFO: created pod pod-service-account-mountsa-mountspec +Dec 20 08:48:36.958: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Dec 20 08:48:36.964: INFO: created pod pod-service-account-nomountsa-mountspec +Dec 20 08:48:36.964: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Dec 20 08:48:36.971: INFO: created pod 
pod-service-account-defaultsa-nomountspec +Dec 20 08:48:36.971: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Dec 20 08:48:36.980: INFO: created pod pod-service-account-mountsa-nomountspec +Dec 20 08:48:36.980: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Dec 20 08:48:36.990: INFO: created pod pod-service-account-nomountsa-nomountspec +Dec 20 08:48:36.990: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:48:36.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-svcaccounts-f85wj" for this suite. +Dec 20 08:48:59.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:48:59.200: INFO: namespace: e2e-tests-svcaccounts-f85wj, resource: bindings, ignored listing per whitelist +Dec 20 08:48:59.233: INFO: namespace e2e-tests-svcaccounts-f85wj deletion completed in 22.229558415s + +• [SLOW TEST:23.027 seconds] +[sig-auth] ServiceAccounts +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 + should allow opting out of API token automount [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] ConfigMap + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:48:59.233: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name configmap-test-volume-map-17ac8303-0434-11e9-b141-0a58ac1c1472 +STEP: Creating a pod to test consume configMaps +Dec 20 08:48:59.360: INFO: Waiting up to 5m0s for pod "pod-configmaps-17ad23e8-0434-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-configmap-6hb9v" to be "success or failure" +Dec 20 08:48:59.366: INFO: Pod "pod-configmaps-17ad23e8-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.969831ms +Dec 20 08:49:01.371: INFO: Pod "pod-configmaps-17ad23e8-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011240396s +Dec 20 08:49:03.380: INFO: Pod "pod-configmaps-17ad23e8-0434-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.020166952s +Dec 20 08:49:05.385: INFO: Pod "pod-configmaps-17ad23e8-0434-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025101114s +STEP: Saw pod success +Dec 20 08:49:05.385: INFO: Pod "pod-configmaps-17ad23e8-0434-11e9-b141-0a58ac1c1472" satisfied condition "success or failure" +Dec 20 08:49:05.389: INFO: Trying to get logs from node 10-6-155-34 pod pod-configmaps-17ad23e8-0434-11e9-b141-0a58ac1c1472 container configmap-volume-test: +STEP: delete the pod +Dec 20 08:49:05.419: INFO: Waiting for pod pod-configmaps-17ad23e8-0434-11e9-b141-0a58ac1c1472 to disappear +Dec 20 08:49:05.422: INFO: Pod pod-configmaps-17ad23e8-0434-11e9-b141-0a58ac1c1472 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:49:05.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-6hb9v" for this suite. +Dec 20 08:49:11.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:49:11.604: INFO: namespace: e2e-tests-configmap-6hb9v, resource: bindings, ignored listing per whitelist +Dec 20 08:49:11.641: INFO: namespace e2e-tests-configmap-6hb9v deletion completed in 6.213516038s + +• [SLOW TEST:12.409 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:49:11.642: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +W1220 08:49:21.876078 17 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
+Dec 20 08:49:21.876: INFO: For apiserver_request_count: +For apiserver_request_latencies_summary: +For etcd_helper_cache_entry_count: +For etcd_helper_cache_hit_count: +For etcd_helper_cache_miss_count: +For etcd_request_cache_add_latencies_summary: +For etcd_request_cache_get_latencies_summary: +For etcd_request_latencies_summary: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:49:21.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-gc-5d66z" for this suite. 
+Dec 20 08:49:27.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:49:27.925: INFO: namespace: e2e-tests-gc-5d66z, resource: bindings, ignored listing per whitelist +Dec 20 08:49:28.066: INFO: namespace e2e-tests-gc-5d66z deletion completed in 6.17657628s + +• [SLOW TEST:16.424 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should delete pods created by rc when not orphaning [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[k8s.io] Pods + should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:49:28.066: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 +[It] should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Dec 20 08:49:34.751: INFO: Successfully updated pod 
"pod-update-28dbe894-0434-11e9-b141-0a58ac1c1472" +STEP: verifying the updated pod is in kubernetes +Dec 20 08:49:34.771: INFO: Pod update OK +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:49:34.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-pods-sxgnz" for this suite. +Dec 20 08:49:56.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:49:56.841: INFO: namespace: e2e-tests-pods-sxgnz, resource: bindings, ignored listing per whitelist +Dec 20 08:49:57.040: INFO: namespace e2e-tests-pods-sxgnz deletion completed in 22.250330364s + +• [SLOW TEST:28.974 seconds] +[k8s.io] Pods +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:49:57.040: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should release no longer matching pods [Conformance] + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Dec 20 08:49:57.235: INFO: Pod name pod-release: Found 0 pods out of 1 +Dec 20 08:50:02.245: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:50:03.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-replication-controller-zzxcs" for this suite. +Dec 20 08:50:09.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:50:09.413: INFO: namespace: e2e-tests-replication-controller-zzxcs, resource: bindings, ignored listing per whitelist +Dec 20 08:50:09.480: INFO: namespace e2e-tests-replication-controller-zzxcs deletion completed in 6.204231836s + +• [SLOW TEST:12.440 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should release no longer matching pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] ConfigMap + 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:50:09.480: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name configmap-test-upd-41906013-0434-11e9-b141-0a58ac1c1472 +STEP: Creating the pod +STEP: Updating configmap configmap-test-upd-41906013-0434-11e9-b141-0a58ac1c1472 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Dec 20 08:50:15.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-7pdnj" for this suite. 
+Dec 20 08:50:37.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:50:37.856: INFO: namespace: e2e-tests-configmap-7pdnj, resource: bindings, ignored listing per whitelist +Dec 20 08:50:37.936: INFO: namespace e2e-tests-configmap-7pdnj deletion completed in 22.210041677s + +• [SLOW TEST:28.456 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:50:37.936: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. 
+[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: create the pod with lifecycle hook
+STEP: delete the pod with lifecycle hook
+Dec 20 08:50:48.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 20 08:50:48.126: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 20 08:50:50.126: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 20 08:50:50.134: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 20 08:50:52.126: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 20 08:50:52.131: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 20 08:50:54.126: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 20 08:50:54.132: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 20 08:50:56.126: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 20 08:50:56.130: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 20 08:50:58.126: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 20 08:50:58.132: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 20 08:51:00.126: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 20 08:51:00.131: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 20 08:51:02.126: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 20 08:51:02.133: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 20 08:51:04.127: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 20 08:51:04.131: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 20 08:51:06.126: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 20 08:51:06.131: INFO: Pod pod-with-prestop-exec-hook still exists
+Dec 20 08:51:08.126: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Dec 20 08:51:08.130: INFO: Pod pod-with-prestop-exec-hook no longer exists
+STEP: check prestop hook
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:51:08.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tg6dr" for this suite.
+Dec 20 08:51:30.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:51:30.288: INFO: namespace: e2e-tests-container-lifecycle-hook-tg6dr, resource: bindings, ignored listing per whitelist
+Dec 20 08:51:30.356: INFO: namespace e2e-tests-container-lifecycle-hook-tg6dr deletion completed in 22.207232129s
+
+• [SLOW TEST:52.420 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
+    should execute prestop exec hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
+  should be submitted and removed [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:51:30.359: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods Set QOS Class
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
+[It] should be submitted and removed [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying QOS class is set on the pod
+[AfterEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:51:30.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pods-2rps7" for this suite.
+Dec 20 08:51:52.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Dec 20 08:51:52.634: INFO: namespace: e2e-tests-pods-2rps7, resource: bindings, ignored listing per whitelist +Dec 20 08:51:52.773: INFO: namespace e2e-tests-pods-2rps7 deletion completed in 22.190455933s + +• [SLOW TEST:22.414 seconds] +[k8s.io] [sig-node] Pods Extended +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + [k8s.io] Pods Set QOS Class + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should be submitted and removed [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy logs on node using proxy subresource [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] version v1 + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Dec 20 08:51:52.773: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy logs on node using proxy subresource [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Dec 20 08:51:52.922: INFO: (0) /api/v1/nodes/10-6-155-33/proxy/logs/:
+anaconda/
+audit/
+boot.log
+>>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[BeforeEach] [k8s.io] Kubectl label
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
+STEP: creating the pod
+Dec 20 08:51:59.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-g4vq4'
+Dec 20 08:51:59.868: INFO: stderr: ""
+Dec 20 08:51:59.868: INFO: stdout: "pod/pause created\n"
+Dec 20 08:51:59.868: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
+Dec 20 08:51:59.868: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-g4vq4" to be "running and ready"
+Dec 20 08:51:59.872: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.567656ms
+Dec 20 08:52:01.879: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010405675s
+Dec 20 08:52:03.888: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.019583212s
+Dec 20 08:52:03.888: INFO: Pod "pause" satisfied condition "running and ready"
+Dec 20 08:52:03.888: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
+[It] should update the label on a resource  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: adding the label testing-label with value testing-label-value to a pod
+Dec 20 08:52:03.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-g4vq4'
+Dec 20 08:52:04.111: INFO: stderr: ""
+Dec 20 08:52:04.111: INFO: stdout: "pod/pause labeled\n"
+STEP: verifying the pod has the label testing-label with the value testing-label-value
+Dec 20 08:52:04.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pod pause -L testing-label --namespace=e2e-tests-kubectl-g4vq4'
+Dec 20 08:52:04.249: INFO: stderr: ""
+Dec 20 08:52:04.249: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
+STEP: removing the label testing-label of a pod
+Dec 20 08:52:04.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 label pods pause testing-label- --namespace=e2e-tests-kubectl-g4vq4'
+Dec 20 08:52:04.440: INFO: stderr: ""
+Dec 20 08:52:04.440: INFO: stdout: "pod/pause labeled\n"
+STEP: verifying the pod doesn't have the label testing-label
+Dec 20 08:52:04.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pod pause -L testing-label --namespace=e2e-tests-kubectl-g4vq4'
+Dec 20 08:52:04.640: INFO: stderr: ""
+Dec 20 08:52:04.640: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
+[AfterEach] [k8s.io] Kubectl label
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
+STEP: using delete to clean up resources
+Dec 20 08:52:04.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-g4vq4'
+Dec 20 08:52:04.832: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Dec 20 08:52:04.832: INFO: stdout: "pod \"pause\" force deleted\n"
+Dec 20 08:52:04.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-g4vq4'
+Dec 20 08:52:05.011: INFO: stderr: "No resources found.\n"
+Dec 20 08:52:05.011: INFO: stdout: ""
+Dec 20 08:52:05.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 get pods -l name=pause --namespace=e2e-tests-kubectl-g4vq4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Dec 20 08:52:05.187: INFO: stderr: ""
+Dec 20 08:52:05.187: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:52:05.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-g4vq4" for this suite.
+Dec 20 08:52:11.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:52:11.342: INFO: namespace: e2e-tests-kubectl-g4vq4, resource: bindings, ignored listing per whitelist
+Dec 20 08:52:11.385: INFO: namespace e2e-tests-kubectl-g4vq4 deletion completed in 6.188958973s
+
+• [SLOW TEST:12.093 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl label
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should update the label on a resource  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0777,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:52:11.386: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0777,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir 0777 on node default medium
+Dec 20 08:52:11.547: INFO: Waiting up to 5m0s for pod "pod-8a3a3971-0434-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-2kxs2" to be "success or failure"
+Dec 20 08:52:11.552: INFO: Pod "pod-8a3a3971-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.524815ms
+Dec 20 08:52:13.562: INFO: Pod "pod-8a3a3971-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014847991s
+Dec 20 08:52:15.595: INFO: Pod "pod-8a3a3971-0434-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.047326707s
+Dec 20 08:52:17.602: INFO: Pod "pod-8a3a3971-0434-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05457671s
+STEP: Saw pod success
+Dec 20 08:52:17.602: INFO: Pod "pod-8a3a3971-0434-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 08:52:17.609: INFO: Trying to get logs from node 10-6-155-34 pod pod-8a3a3971-0434-11e9-b141-0a58ac1c1472 container test-container: <nil>
+STEP: delete the pod
+Dec 20 08:52:17.662: INFO: Waiting for pod pod-8a3a3971-0434-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 08:52:17.665: INFO: Pod pod-8a3a3971-0434-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:52:17.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-2kxs2" for this suite.
+Dec 20 08:52:23.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:52:23.744: INFO: namespace: e2e-tests-emptydir-2kxs2, resource: bindings, ignored listing per whitelist
+Dec 20 08:52:23.864: INFO: namespace e2e-tests-emptydir-2kxs2 deletion completed in 6.186878805s
+
+• [SLOW TEST:12.479 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  should support (root,0777,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1 
+  should proxy through a service and a pod  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] version v1
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:52:23.865: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy through a service and a pod  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: starting an echo server on multiple ports
+STEP: creating replication controller proxy-service-s5j65 in namespace e2e-tests-proxy-qhqmf
+I1220 08:52:24.052974      17 runners.go:184] Created replication controller with name: proxy-service-s5j65, namespace: e2e-tests-proxy-qhqmf, replica count: 1
+I1220 08:52:25.103599      17 runners.go:184] proxy-service-s5j65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1220 08:52:26.103837      17 runners.go:184] proxy-service-s5j65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1220 08:52:27.104705      17 runners.go:184] proxy-service-s5j65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1220 08:52:28.104894      17 runners.go:184] proxy-service-s5j65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I1220 08:52:29.105159      17 runners.go:184] proxy-service-s5j65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I1220 08:52:30.106089      17 runners.go:184] proxy-service-s5j65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I1220 08:52:31.106609      17 runners.go:184] proxy-service-s5j65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I1220 08:52:32.106884      17 runners.go:184] proxy-service-s5j65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I1220 08:52:33.107336      17 runners.go:184] proxy-service-s5j65 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+Dec 20 08:52:33.111: INFO: setup took 9.081304582s, starting test cases
+STEP: running 16 cases, 20 attempts per case, 320 total attempts
+Dec 20 08:52:33.130: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qhqmf/pods/proxy-service-s5j65-rshzg:1080/proxy/: 
+>>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test override command
+Dec 20 08:52:42.067: INFO: Waiting up to 5m0s for pod "client-containers-9c6a2cac-0434-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-containers-b2scd" to be "success or failure"
+Dec 20 08:52:42.071: INFO: Pod "client-containers-9c6a2cac-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.96112ms
+Dec 20 08:52:44.078: INFO: Pod "client-containers-9c6a2cac-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010828995s
+Dec 20 08:52:46.094: INFO: Pod "client-containers-9c6a2cac-0434-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.02694877s
+Dec 20 08:52:48.099: INFO: Pod "client-containers-9c6a2cac-0434-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032005295s
+STEP: Saw pod success
+Dec 20 08:52:48.099: INFO: Pod "client-containers-9c6a2cac-0434-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 08:52:48.105: INFO: Trying to get logs from node 10-6-155-34 pod client-containers-9c6a2cac-0434-11e9-b141-0a58ac1c1472 container test-container: <nil>
+STEP: delete the pod
+Dec 20 08:52:48.145: INFO: Waiting for pod client-containers-9c6a2cac-0434-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 08:52:48.148: INFO: Pod client-containers-9c6a2cac-0434-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:52:48.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-containers-b2scd" for this suite.
+Dec 20 08:52:54.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:52:54.371: INFO: namespace: e2e-tests-containers-b2scd, resource: bindings, ignored listing per whitelist
+Dec 20 08:52:54.385: INFO: namespace e2e-tests-containers-b2scd deletion completed in 6.23071675s
+
+• [SLOW TEST:12.458 seconds]
+[k8s.io] Docker Containers
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:52:54.385: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-a3dd6216-0434-11e9-b141-0a58ac1c1472
+STEP: Creating a pod to test consume secrets
+Dec 20 08:52:54.569: INFO: Waiting up to 5m0s for pod "pod-secrets-a3de3610-0434-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-secrets-g2jhs" to be "success or failure"
+Dec 20 08:52:54.574: INFO: Pod "pod-secrets-a3de3610-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.980693ms
+Dec 20 08:52:56.580: INFO: Pod "pod-secrets-a3de3610-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010795717s
+Dec 20 08:52:58.586: INFO: Pod "pod-secrets-a3de3610-0434-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.016848353s
+Dec 20 08:53:00.593: INFO: Pod "pod-secrets-a3de3610-0434-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024587191s
+STEP: Saw pod success
+Dec 20 08:53:00.594: INFO: Pod "pod-secrets-a3de3610-0434-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 08:53:00.597: INFO: Trying to get logs from node 10-6-155-34 pod pod-secrets-a3de3610-0434-11e9-b141-0a58ac1c1472 container secret-volume-test: <nil>
+STEP: delete the pod
+Dec 20 08:53:00.617: INFO: Waiting for pod pod-secrets-a3de3610-0434-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 08:53:00.622: INFO: Pod pod-secrets-a3de3610-0434-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:53:00.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-g2jhs" for this suite.
+Dec 20 08:53:06.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:53:06.658: INFO: namespace: e2e-tests-secrets-g2jhs, resource: bindings, ignored listing per whitelist
+Dec 20 08:53:06.793: INFO: namespace e2e-tests-secrets-g2jhs deletion completed in 6.165511372s
+
+• [SLOW TEST:12.408 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:53:06.794: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Dec 20 08:53:06.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab3f3ab4-0434-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-downward-api-rd6jd" to be "success or failure"
+Dec 20 08:53:07.070: INFO: Pod "downwardapi-volume-ab3f3ab4-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 89.071084ms
+Dec 20 08:53:09.077: INFO: Pod "downwardapi-volume-ab3f3ab4-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095752004s
+Dec 20 08:53:11.082: INFO: Pod "downwardapi-volume-ab3f3ab4-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10073133s
+Dec 20 08:53:13.086: INFO: Pod "downwardapi-volume-ab3f3ab4-0434-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105107929s
+STEP: Saw pod success
+Dec 20 08:53:13.086: INFO: Pod "downwardapi-volume-ab3f3ab4-0434-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 08:53:13.090: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-ab3f3ab4-0434-11e9-b141-0a58ac1c1472 container client-container: <nil>
+STEP: delete the pod
+Dec 20 08:53:13.123: INFO: Waiting for pod downwardapi-volume-ab3f3ab4-0434-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 08:53:13.127: INFO: Pod downwardapi-volume-ab3f3ab4-0434-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:53:13.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-rd6jd" for this suite.
+Dec 20 08:53:19.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:53:19.224: INFO: namespace: e2e-tests-downward-api-rd6jd, resource: bindings, ignored listing per whitelist
+Dec 20 08:53:19.317: INFO: namespace e2e-tests-downward-api-rd6jd deletion completed in 6.182645968s
+
+• [SLOW TEST:12.523 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should set DefaultMode on files [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:53:19.317: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should set DefaultMode on files [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Dec 20 08:53:19.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2b82339-0434-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-8822q" to be "success or failure"
+Dec 20 08:53:19.485: INFO: Pod "downwardapi-volume-b2b82339-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181235ms
+Dec 20 08:53:21.489: INFO: Pod "downwardapi-volume-b2b82339-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008476103s
+Dec 20 08:53:23.493: INFO: Pod "downwardapi-volume-b2b82339-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012428473s
+Dec 20 08:53:25.503: INFO: Pod "downwardapi-volume-b2b82339-0434-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021744487s
+STEP: Saw pod success
+Dec 20 08:53:25.503: INFO: Pod "downwardapi-volume-b2b82339-0434-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 08:53:25.508: INFO: Trying to get logs from node 10-6-155-34 pod downwardapi-volume-b2b82339-0434-11e9-b141-0a58ac1c1472 container client-container: 
+STEP: delete the pod
+Dec 20 08:53:25.546: INFO: Waiting for pod downwardapi-volume-b2b82339-0434-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 08:53:25.566: INFO: Pod downwardapi-volume-b2b82339-0434-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:53:25.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-8822q" for this suite.
+Dec 20 08:53:31.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:53:31.727: INFO: namespace: e2e-tests-projected-8822q, resource: bindings, ignored listing per whitelist
+Dec 20 08:53:31.777: INFO: namespace e2e-tests-projected-8822q deletion completed in 6.19835706s
+
+• [SLOW TEST:12.460 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should set DefaultMode on files [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:53:31.777: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name projected-configmap-test-volume-ba219154-0434-11e9-b141-0a58ac1c1472
+STEP: Creating a pod to test consume configMaps
+Dec 20 08:53:31.924: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ba22b8b1-0434-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-f68vv" to be "success or failure"
+Dec 20 08:53:31.930: INFO: Pod "pod-projected-configmaps-ba22b8b1-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 5.852575ms
+Dec 20 08:53:33.935: INFO: Pod "pod-projected-configmaps-ba22b8b1-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010208493s
+Dec 20 08:53:35.940: INFO: Pod "pod-projected-configmaps-ba22b8b1-0434-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.015770384s
+Dec 20 08:53:37.950: INFO: Pod "pod-projected-configmaps-ba22b8b1-0434-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025710424s
+STEP: Saw pod success
+Dec 20 08:53:37.950: INFO: Pod "pod-projected-configmaps-ba22b8b1-0434-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 08:53:37.959: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-configmaps-ba22b8b1-0434-11e9-b141-0a58ac1c1472 container projected-configmap-volume-test: 
+STEP: delete the pod
+Dec 20 08:53:37.987: INFO: Waiting for pod pod-projected-configmaps-ba22b8b1-0434-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 08:53:37.990: INFO: Pod pod-projected-configmaps-ba22b8b1-0434-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:53:37.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-f68vv" for this suite.
+Dec 20 08:53:44.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:53:44.133: INFO: namespace: e2e-tests-projected-f68vv, resource: bindings, ignored listing per whitelist
+Dec 20 08:53:44.193: INFO: namespace e2e-tests-projected-f68vv deletion completed in 6.194965584s
+
+• [SLOW TEST:12.416 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
+  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:53:44.195: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating the pod
+Dec 20 08:53:50.890: INFO: Successfully updated pod "labelsupdatec1878f62-0434-11e9-b141-0a58ac1c1472"
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:53:52.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-9xmhl" for this suite.
+Dec 20 08:54:14.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:54:14.966: INFO: namespace: e2e-tests-downward-api-9xmhl, resource: bindings, ignored listing per whitelist
+Dec 20 08:54:15.167: INFO: namespace e2e-tests-downward-api-9xmhl deletion completed in 22.241802233s
+
+• [SLOW TEST:30.973 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:54:15.168: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-map-d40e5266-0434-11e9-b141-0a58ac1c1472
+STEP: Creating a pod to test consume secrets
+Dec 20 08:54:15.424: INFO: Waiting up to 5m0s for pod "pod-secrets-d40f2fc9-0434-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-secrets-9wmdj" to be "success or failure"
+Dec 20 08:54:15.431: INFO: Pod "pod-secrets-d40f2fc9-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 7.108442ms
+Dec 20 08:54:17.436: INFO: Pod "pod-secrets-d40f2fc9-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012306256s
+Dec 20 08:54:19.450: INFO: Pod "pod-secrets-d40f2fc9-0434-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.026350676s
+Dec 20 08:54:21.455: INFO: Pod "pod-secrets-d40f2fc9-0434-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031175818s
+STEP: Saw pod success
+Dec 20 08:54:21.455: INFO: Pod "pod-secrets-d40f2fc9-0434-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 08:54:21.458: INFO: Trying to get logs from node 10-6-155-34 pod pod-secrets-d40f2fc9-0434-11e9-b141-0a58ac1c1472 container secret-volume-test: 
+STEP: delete the pod
+Dec 20 08:54:21.483: INFO: Waiting for pod pod-secrets-d40f2fc9-0434-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 08:54:21.488: INFO: Pod pod-secrets-d40f2fc9-0434-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:54:21.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-9wmdj" for this suite.
+Dec 20 08:54:27.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:54:27.672: INFO: namespace: e2e-tests-secrets-9wmdj, resource: bindings, ignored listing per whitelist
+Dec 20 08:54:27.787: INFO: namespace e2e-tests-secrets-9wmdj deletion completed in 6.291633958s
+
+• [SLOW TEST:12.619 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:54:27.787: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-volume-map-db8b4c48-0434-11e9-b141-0a58ac1c1472
+STEP: Creating a pod to test consume configMaps
+Dec 20 08:54:27.986: INFO: Waiting up to 5m0s for pod "pod-configmaps-db8c817d-0434-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-configmap-jrkb5" to be "success or failure"
+Dec 20 08:54:27.997: INFO: Pod "pod-configmaps-db8c817d-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 10.649205ms
+Dec 20 08:54:30.011: INFO: Pod "pod-configmaps-db8c817d-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02513157s
+Dec 20 08:54:32.021: INFO: Pod "pod-configmaps-db8c817d-0434-11e9-b141-0a58ac1c1472": Phase="Running", Reason="", readiness=true. Elapsed: 4.034907013s
+Dec 20 08:54:34.027: INFO: Pod "pod-configmaps-db8c817d-0434-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040294862s
+STEP: Saw pod success
+Dec 20 08:54:34.027: INFO: Pod "pod-configmaps-db8c817d-0434-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 08:54:34.030: INFO: Trying to get logs from node 10-6-155-34 pod pod-configmaps-db8c817d-0434-11e9-b141-0a58ac1c1472 container configmap-volume-test: 
+STEP: delete the pod
+Dec 20 08:54:34.059: INFO: Waiting for pod pod-configmaps-db8c817d-0434-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 08:54:34.063: INFO: Pod pod-configmaps-db8c817d-0434-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:54:34.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-jrkb5" for this suite.
+Dec 20 08:54:40.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:54:40.170: INFO: namespace: e2e-tests-configmap-jrkb5, resource: bindings, ignored listing per whitelist
+Dec 20 08:54:40.257: INFO: namespace e2e-tests-configmap-jrkb5 deletion completed in 6.183975611s
+
+• [SLOW TEST:12.470 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[k8s.io] Pods 
+  should contain environment variables for services [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:54:40.257: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
+[It] should contain environment variables for services [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Dec 20 08:54:44.502: INFO: Waiting up to 5m0s for pod "client-envvars-e56579fc-0434-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-pods-frqrp" to be "success or failure"
+Dec 20 08:54:44.510: INFO: Pod "client-envvars-e56579fc-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 7.442387ms
+Dec 20 08:54:46.514: INFO: Pod "client-envvars-e56579fc-0434-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012328294s
+Dec 20 08:54:48.521: INFO: Pod "client-envvars-e56579fc-0434-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018770132s
+STEP: Saw pod success
+Dec 20 08:54:48.521: INFO: Pod "client-envvars-e56579fc-0434-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 08:54:48.526: INFO: Trying to get logs from node 10-6-155-34 pod client-envvars-e56579fc-0434-11e9-b141-0a58ac1c1472 container env3cont: 
+STEP: delete the pod
+Dec 20 08:54:48.553: INFO: Waiting for pod client-envvars-e56579fc-0434-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 08:54:48.558: INFO: Pod client-envvars-e56579fc-0434-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:54:48.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pods-frqrp" for this suite.
+Dec 20 08:55:30.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:55:30.617: INFO: namespace: e2e-tests-pods-frqrp, resource: bindings, ignored listing per whitelist
+Dec 20 08:55:30.781: INFO: namespace e2e-tests-pods-frqrp deletion completed in 42.204035714s
+
+• [SLOW TEST:50.524 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should contain environment variables for services [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for the cluster  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] DNS
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:55:30.782: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for the cluster  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-94hdg.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-94hdg.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-94hdg.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-94hdg.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-94hdg.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-94hdg.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Dec 20 08:55:39.199: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-94hdg.svc.cluster.local from pod e2e-tests-dns-94hdg/dns-test-01114e11-0435-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-01114e11-0435-11e9-b141-0a58ac1c1472)
+Dec 20 08:55:39.207: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-94hdg/dns-test-01114e11-0435-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-01114e11-0435-11e9-b141-0a58ac1c1472)
+Dec 20 08:55:39.212: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-94hdg/dns-test-01114e11-0435-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-01114e11-0435-11e9-b141-0a58ac1c1472)
+Dec 20 08:55:39.223: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-94hdg/dns-test-01114e11-0435-11e9-b141-0a58ac1c1472: the server could not find the requested resource (get pods dns-test-01114e11-0435-11e9-b141-0a58ac1c1472)
+Dec 20 08:55:39.223: INFO: Lookups using e2e-tests-dns-94hdg/dns-test-01114e11-0435-11e9-b141-0a58ac1c1472 failed for: [jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-94hdg.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
+
+Dec 20 08:55:44.484: INFO: DNS probes using e2e-tests-dns-94hdg/dns-test-01114e11-0435-11e9-b141-0a58ac1c1472 succeeded
+
+STEP: deleting the pod
+[AfterEach] [sig-network] DNS
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:55:44.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-dns-94hdg" for this suite.
+Dec 20 08:55:50.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:55:50.764: INFO: namespace: e2e-tests-dns-94hdg, resource: bindings, ignored listing per whitelist
+Dec 20 08:55:50.831: INFO: namespace e2e-tests-dns-94hdg deletion completed in 6.30532792s
+
+• [SLOW TEST:20.050 seconds]
+[sig-network] DNS
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  should provide DNS for the cluster  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with projected pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:55:50.832: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with projected pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating pod pod-subpath-test-projected-zjvg
+STEP: Creating a pod to test atomic-volume-subpath
+Dec 20 08:55:51.044: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-zjvg" in namespace "e2e-tests-subpath-4hftw" to be "success or failure"
+Dec 20 08:55:51.063: INFO: Pod "pod-subpath-test-projected-zjvg": Phase="Pending", Reason="", readiness=false. Elapsed: 18.562883ms
+Dec 20 08:55:53.074: INFO: Pod "pod-subpath-test-projected-zjvg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030412142s
+Dec 20 08:55:55.084: INFO: Pod "pod-subpath-test-projected-zjvg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039591803s
+Dec 20 08:55:57.095: INFO: Pod "pod-subpath-test-projected-zjvg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050726034s
+STEP: Saw pod success
+Dec 20 08:56:19.179: INFO: Pod "pod-subpath-test-projected-zjvg" satisfied condition "success or failure"
+Dec 20 08:56:19.184: INFO: Trying to get logs from node 10-6-155-34 pod pod-subpath-test-projected-zjvg container test-container-subpath-projected-zjvg: 
+STEP: delete the pod
+Dec 20 08:56:19.216: INFO: Waiting for pod pod-subpath-test-projected-zjvg to disappear
+Dec 20 08:56:19.229: INFO: Pod pod-subpath-test-projected-zjvg no longer exists
+STEP: Deleting pod pod-subpath-test-projected-zjvg
+Dec 20 08:56:19.229: INFO: Deleting pod "pod-subpath-test-projected-zjvg" in namespace "e2e-tests-subpath-4hftw"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:56:19.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-subpath-4hftw" for this suite.
+Dec 20 08:56:25.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:56:25.293: INFO: namespace: e2e-tests-subpath-4hftw, resource: bindings, ignored listing per whitelist
+Dec 20 08:56:25.428: INFO: namespace e2e-tests-subpath-4hftw deletion completed in 6.190260244s
+
+• [SLOW TEST:34.597 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  Atomic writer volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+    should support subpaths with projected pod [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with downward pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:56:25.428: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with downward pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating pod pod-subpath-test-downwardapi-bk9d
+STEP: Creating a pod to test atomic-volume-subpath
+Dec 20 08:56:25.561: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bk9d" in namespace "e2e-tests-subpath-h8hpd" to be "success or failure"
+Dec 20 08:56:25.566: INFO: Pod "pod-subpath-test-downwardapi-bk9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398493ms
+Dec 20 08:56:27.571: INFO: Pod "pod-subpath-test-downwardapi-bk9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010246699s
+Dec 20 08:56:29.578: INFO: Pod "pod-subpath-test-downwardapi-bk9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016888766s
+Dec 20 08:56:31.584: INFO: Pod "pod-subpath-test-downwardapi-bk9d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.090090683s
+STEP: Saw pod success
+Dec 20 08:56:53.651: INFO: Pod "pod-subpath-test-downwardapi-bk9d" satisfied condition "success or failure"
+Dec 20 08:56:53.656: INFO: Trying to get logs from node 10-6-155-34 pod pod-subpath-test-downwardapi-bk9d container test-container-subpath-downwardapi-bk9d: 
+STEP: delete the pod
+Dec 20 08:56:53.681: INFO: Waiting for pod pod-subpath-test-downwardapi-bk9d to disappear
+Dec 20 08:56:53.688: INFO: Pod pod-subpath-test-downwardapi-bk9d no longer exists
+STEP: Deleting pod pod-subpath-test-downwardapi-bk9d
+Dec 20 08:56:53.688: INFO: Deleting pod "pod-subpath-test-downwardapi-bk9d" in namespace "e2e-tests-subpath-h8hpd"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:56:53.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-subpath-h8hpd" for this suite.
+Dec 20 08:56:59.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:56:59.815: INFO: namespace: e2e-tests-subpath-h8hpd, resource: bindings, ignored listing per whitelist
+Dec 20 08:56:59.889: INFO: namespace e2e-tests-subpath-h8hpd deletion completed in 6.187265796s
+
+• [SLOW TEST:34.461 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  Atomic writer volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+    should support subpaths with downward pod [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[k8s.io] Probing container 
+  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:56:59.890: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
+[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:58:00.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-probe-j8jp5" for this suite.
+Dec 20 08:58:22.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:58:22.164: INFO: namespace: e2e-tests-container-probe-j8jp5, resource: bindings, ignored listing per whitelist
+Dec 20 08:58:22.170: INFO: namespace e2e-tests-container-probe-j8jp5 deletion completed in 22.151698812s
+
+• [SLOW TEST:82.280 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-network] Services 
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:58:22.171: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
+[It] should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating service multi-endpoint-test in namespace e2e-tests-services-96cd5
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-96cd5 to expose endpoints map[]
+Dec 20 08:58:22.292: INFO: Get endpoints failed (10.080406ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
+Dec 20 08:58:23.297: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-96cd5 exposes endpoints map[] (1.015018344s elapsed)
+STEP: Creating pod pod1 in namespace e2e-tests-services-96cd5
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-96cd5 to expose endpoints map[pod1:[100]]
+Dec 20 08:58:27.372: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.065054267s elapsed, will retry)
+Dec 20 08:58:28.381: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-96cd5 exposes endpoints map[pod1:[100]] (5.073680879s elapsed)
+STEP: Creating pod pod2 in namespace e2e-tests-services-96cd5
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-96cd5 to expose endpoints map[pod1:[100] pod2:[101]]
+Dec 20 08:58:32.514: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-96cd5 exposes endpoints map[pod1:[100] pod2:[101]] (4.126778877s elapsed)
+STEP: Deleting pod pod1 in namespace e2e-tests-services-96cd5
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-96cd5 to expose endpoints map[pod2:[101]]
+Dec 20 08:58:33.564: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-96cd5 exposes endpoints map[pod2:[101]] (1.035675066s elapsed)
+STEP: Deleting pod pod2 in namespace e2e-tests-services-96cd5
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-96cd5 to expose endpoints map[]
+Dec 20 08:58:34.597: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-96cd5 exposes endpoints map[] (1.024831988s elapsed)
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:58:34.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-services-96cd5" for this suite.
+Dec 20 08:58:40.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:58:40.700: INFO: namespace: e2e-tests-services-96cd5, resource: bindings, ignored listing per whitelist
+Dec 20 08:58:40.817: INFO: namespace e2e-tests-services-96cd5 deletion completed in 6.188948252s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
+
+• [SLOW TEST:18.646 seconds]
+[sig-network] Services
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-storage] EmptyDir wrapper volumes 
+  should not conflict [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:58:40.817: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename emptydir-wrapper
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not conflict [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Cleaning up the secret
+STEP: Cleaning up the configmap
+STEP: Cleaning up the pod
+[AfterEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:58:45.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-wrapper-gmcxr" for this suite.
+Dec 20 08:58:51.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:58:51.208: INFO: namespace: e2e-tests-emptydir-wrapper-gmcxr, resource: bindings, ignored listing per whitelist
+Dec 20 08:58:51.353: INFO: namespace e2e-tests-emptydir-wrapper-gmcxr deletion completed in 6.244332316s
+
+• [SLOW TEST:10.536 seconds]
+[sig-storage] EmptyDir wrapper volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  should not conflict [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[sig-auth] ServiceAccounts 
+  should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:58:51.353: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename svcaccounts
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: getting the auto-created API token
+STEP: Creating a pod to test consume service account token
+Dec 20 08:58:52.098: INFO: Waiting up to 5m0s for pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-t5krl" in namespace "e2e-tests-svcaccounts-79cvd" to be "success or failure"
+Dec 20 08:58:52.105: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-t5krl": Phase="Pending", Reason="", readiness=false. Elapsed: 7.050322ms
+Dec 20 08:58:54.110: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-t5krl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012401531s
+Dec 20 08:58:56.131: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-t5krl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03304749s
+Dec 20 08:58:58.135: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-t5krl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037029421s
+Dec 20 08:59:00.147: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-t5krl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048830555s
+STEP: Saw pod success
+Dec 20 08:59:00.147: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-t5krl" satisfied condition "success or failure"
+Dec 20 08:59:00.161: INFO: Trying to get logs from node 10-6-155-34 pod pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-t5krl container token-test: 
+STEP: delete the pod
+Dec 20 08:59:00.231: INFO: Waiting for pod pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-t5krl to disappear
+Dec 20 08:59:00.239: INFO: Pod pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-t5krl no longer exists
+STEP: Creating a pod to test consume service account root CA
+Dec 20 08:59:00.264: INFO: Waiting up to 5m0s for pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-tts2w" in namespace "e2e-tests-svcaccounts-79cvd" to be "success or failure"
+Dec 20 08:59:00.280: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-tts2w": Phase="Pending", Reason="", readiness=false. Elapsed: 15.930994ms
+Dec 20 08:59:02.285: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-tts2w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021445302s
+Dec 20 08:59:04.305: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-tts2w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041667902s
+Dec 20 08:59:06.314: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-tts2w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050293297s
+Dec 20 08:59:08.322: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-tts2w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058266322s
+STEP: Saw pod success
+Dec 20 08:59:08.322: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-tts2w" satisfied condition "success or failure"
+Dec 20 08:59:08.335: INFO: Trying to get logs from node 10-6-155-34 pod pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-tts2w container root-ca-test: 
+STEP: delete the pod
+Dec 20 08:59:08.373: INFO: Waiting for pod pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-tts2w to disappear
+Dec 20 08:59:08.376: INFO: Pod pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-tts2w no longer exists
+STEP: Creating a pod to test consume service account namespace
+Dec 20 08:59:08.383: INFO: Waiting up to 5m0s for pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-nl8qz" in namespace "e2e-tests-svcaccounts-79cvd" to be "success or failure"
+Dec 20 08:59:08.391: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-nl8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 7.060638ms
+Dec 20 08:59:10.431: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-nl8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047855122s
+Dec 20 08:59:12.441: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-nl8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057409042s
+Dec 20 08:59:14.455: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-nl8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071848792s
+Dec 20 08:59:16.466: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-nl8qz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082760406s
+STEP: Saw pod success
+Dec 20 08:59:16.466: INFO: Pod "pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-nl8qz" satisfied condition "success or failure"
+Dec 20 08:59:16.475: INFO: Trying to get logs from node 10-6-155-34 pod pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-nl8qz container namespace-test: 
+STEP: delete the pod
+Dec 20 08:59:16.517: INFO: Waiting for pod pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-nl8qz to disappear
+Dec 20 08:59:16.522: INFO: Pod pod-service-account-78f87688-0435-11e9-b141-0a58ac1c1472-nl8qz no longer exists
+[AfterEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:59:16.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-svcaccounts-79cvd" for this suite.
+Dec 20 08:59:22.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:59:22.755: INFO: namespace: e2e-tests-svcaccounts-79cvd, resource: bindings, ignored listing per whitelist
+Dec 20 08:59:22.819: INFO: namespace e2e-tests-svcaccounts-79cvd deletion completed in 6.278376001s
+
+• [SLOW TEST:31.466 seconds]
+[sig-auth] ServiceAccounts
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
+  should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-api-machinery] Watchers 
+  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:59:22.819: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating a watch on configmaps
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: closing the watch once it receives two notifications
+Dec 20 08:59:22.980: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lsddk,SelfLink:/api/v1/namespaces/e2e-tests-watch-lsddk/configmaps/e2e-watch-test-watch-closed,UID:8b5f4dfd-0435-11e9-b07b-0242ac120004,ResourceVersion:969400,Generation:0,CreationTimestamp:2018-12-20 08:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
+Dec 20 08:59:22.980: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lsddk,SelfLink:/api/v1/namespaces/e2e-tests-watch-lsddk/configmaps/e2e-watch-test-watch-closed,UID:8b5f4dfd-0435-11e9-b07b-0242ac120004,ResourceVersion:969401,Generation:0,CreationTimestamp:2018-12-20 08:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+STEP: modifying the configmap a second time, while the watch is closed
+STEP: creating a new watch on configmaps from the last resource version observed by the first watch
+STEP: deleting the configmap
+STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
+Dec 20 08:59:23.007: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lsddk,SelfLink:/api/v1/namespaces/e2e-tests-watch-lsddk/configmaps/e2e-watch-test-watch-closed,UID:8b5f4dfd-0435-11e9-b07b-0242ac120004,ResourceVersion:969402,Generation:0,CreationTimestamp:2018-12-20 08:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Dec 20 08:59:23.008: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lsddk,SelfLink:/api/v1/namespaces/e2e-tests-watch-lsddk/configmaps/e2e-watch-test-watch-closed,UID:8b5f4dfd-0435-11e9-b07b-0242ac120004,ResourceVersion:969403,Generation:0,CreationTimestamp:2018-12-20 08:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:59:23.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-watch-lsddk" for this suite.
+Dec 20 08:59:29.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:59:29.108: INFO: namespace: e2e-tests-watch-lsddk, resource: bindings, ignored listing per whitelist
+Dec 20 08:59:29.199: INFO: namespace e2e-tests-watch-lsddk deletion completed in 6.177942357s
+
+• [SLOW TEST:6.379 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates that NodeSelector is respected if matching  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:59:29.199: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
+Dec 20 08:59:29.314: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Dec 20 08:59:29.327: INFO: Waiting for terminating namespaces to be deleted...
+Dec 20 08:59:29.332: INFO: 
+Logging pods the kubelet thinks is on node 10-6-155-33 before test
+Dec 20 08:59:29.352: INFO: coredns-87987d698-4brj5 from kube-system started at 2018-12-17 03:35:16 +0000 UTC (1 container statuses recorded)
+Dec 20 08:59:29.352: INFO: 	Container coredns ready: true, restart count 0
+Dec 20 08:59:29.352: INFO: calico-kube-controllers-5dd6c6f8bc-4xfk4 from kube-system started at 2018-12-17 03:35:16 +0000 UTC (1 container statuses recorded)
+Dec 20 08:59:29.352: INFO: 	Container calico-kube-controllers ready: true, restart count 0
+Dec 20 08:59:29.352: INFO: wordpress-wordpress-97f5cbb67-6j958 from default started at 2018-12-17 03:35:16 +0000 UTC (1 container statuses recorded)
+Dec 20 08:59:29.352: INFO: 	Container wordpress-wordpress ready: true, restart count 0
+Dec 20 08:59:29.352: INFO: coredns-87987d698-55xbs from kube-system started at 2018-12-13 03:08:41 +0000 UTC (1 container statuses recorded)
+Dec 20 08:59:29.352: INFO: 	Container coredns ready: true, restart count 1
+Dec 20 08:59:29.352: INFO: calico-node-lbxlp from kube-system started at 2018-12-20 07:15:25 +0000 UTC (2 container statuses recorded)
+Dec 20 08:59:29.352: INFO: 	Container calico-node ready: true, restart count 0
+Dec 20 08:59:29.352: INFO: 	Container install-cni ready: true, restart count 0
+Dec 20 08:59:29.352: INFO: wordpress-wordpress-mysql-75d5f8f644-tbzfh from default started at 2018-12-13 03:19:52 +0000 UTC (1 container statuses recorded)
+Dec 20 08:59:29.352: INFO: 	Container wordpress-mysql ready: true, restart count 1
+Dec 20 08:59:29.352: INFO: kube-proxy-84x26 from kube-system started at 2018-12-20 07:15:33 +0000 UTC (1 container statuses recorded)
+Dec 20 08:59:29.352: INFO: 	Container kube-proxy ready: true, restart count 0
+Dec 20 08:59:29.352: INFO: d2048-2048-7b95b48c9b-n6hqw from default started at 2018-12-20 07:19:05 +0000 UTC (1 container statuses recorded)
+Dec 20 08:59:29.352: INFO: 	Container d2048-2048 ready: true, restart count 0
+Dec 20 08:59:29.352: INFO: smokeping-sb4jz from kube-system started at 2018-12-13 03:01:41 +0000 UTC (1 container statuses recorded)
+Dec 20 08:59:29.352: INFO: 	Container smokeping ready: true, restart count 5
+Dec 20 08:59:29.352: INFO: 
+Logging pods the kubelet thinks is on node 10-6-155-34 before test
+Dec 20 08:59:29.370: INFO: kube-proxy-m94wf from kube-system started at 2018-12-20 07:15:39 +0000 UTC (1 container statuses recorded)
+Dec 20 08:59:29.370: INFO: 	Container kube-proxy ready: true, restart count 0
+Dec 20 08:59:29.370: INFO: sonobuoy from heptio-sonobuoy started at 2018-12-20 07:21:15 +0000 UTC (3 container statuses recorded)
+Dec 20 08:59:29.370: INFO: 	Container cleanup ready: true, restart count 0
+Dec 20 08:59:29.370: INFO: 	Container forwarder ready: true, restart count 0
+Dec 20 08:59:29.370: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Dec 20 08:59:29.370: INFO: calico-node-mz7bv from kube-system started at 2018-12-20 07:15:25 +0000 UTC (2 container statuses recorded)
+Dec 20 08:59:29.370: INFO: 	Container calico-node ready: true, restart count 0
+Dec 20 08:59:29.370: INFO: 	Container install-cni ready: true, restart count 0
+Dec 20 08:59:29.370: INFO: sonobuoy-e2e-job-b25697b233924eae from heptio-sonobuoy started at 2018-12-20 07:21:27 +0000 UTC (2 container statuses recorded)
+Dec 20 08:59:29.370: INFO: 	Container e2e ready: true, restart count 0
+Dec 20 08:59:29.370: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+[It] validates that NodeSelector is respected if matching  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Trying to launch a pod without a label to get a node which can launch it.
+STEP: Explicitly delete pod here to free the resource it takes.
+STEP: Trying to apply a random label on the found node.
+STEP: verifying the node has the label kubernetes.io/e2e-919cf90a-0435-11e9-b141-0a58ac1c1472 42
+STEP: Trying to relaunch the pod, now with labels.
+STEP: removing the label kubernetes.io/e2e-919cf90a-0435-11e9-b141-0a58ac1c1472 off the node 10-6-155-34
+STEP: verifying the node doesn't have the label kubernetes.io/e2e-919cf90a-0435-11e9-b141-0a58ac1c1472
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 08:59:39.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-sched-pred-vt7b9" for this suite.
+Dec 20 08:59:49.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 08:59:49.665: INFO: namespace: e2e-tests-sched-pred-vt7b9, resource: bindings, ignored listing per whitelist
+Dec 20 08:59:49.763: INFO: namespace e2e-tests-sched-pred-vt7b9 deletion completed in 10.209798279s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
+
+• [SLOW TEST:20.564 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
+  validates that NodeSelector is respected if matching  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  deployment should support rollover [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 08:59:49.763: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
+[It] deployment should support rollover [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Dec 20 08:59:49.918: INFO: Pod name rollover-pod: Found 0 pods out of 1
+Dec 20 08:59:54.924: INFO: Pod name rollover-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+Dec 20 08:59:54.924: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
+Dec 20 08:59:56.935: INFO: Creating deployment "test-rollover-deployment"
+Dec 20 08:59:56.945: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
+Dec 20 08:59:58.956: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
+Dec 20 08:59:58.971: INFO: Ensure that both replica sets have 1 created replica
+Dec 20 08:59:58.985: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
+Dec 20 08:59:59.003: INFO: Updating deployment test-rollover-deployment
+Dec 20 08:59:59.003: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
+Dec 20 09:00:01.021: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
+Dec 20 09:00:01.044: INFO: Make sure deployment "test-rollover-deployment" is complete
+Dec 20 09:00:01.059: INFO: all replica sets need to contain the pod-template-hash label
+Dec 20 09:00:01.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63680893196, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63680893196, loc:(*time.Location)(0x7b33b80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63680893199, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63680893196, loc:(*time.Location)(0x7b33b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Dec 20 09:00:03.069: INFO: all replica sets need to contain the pod-template-hash label
+Dec 20 09:00:03.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, 
+Dec 20 09:00:15.079: INFO: 
+Dec 20 09:00:15.079: INFO: Ensure that both old replica sets have no replicas
+docker-pullable://gcr.io/kubernetes-e2e-test-images/redis-amd64@sha256:2238f5a02d2648d41cc94a88f084060fbfa860890220328eb92696bf2ac649c9 docker://a8ab49cccbe353323c34924a3ace843e4dfd5930600bd82a335aca0174d4a813}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:00:15.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-deployment-jk8cp" for this suite.
+Dec 20 09:00:21.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:00:21.300: INFO: namespace: e2e-tests-deployment-jk8cp, resource: bindings, ignored listing per whitelist
+Dec 20 09:00:21.432: INFO: namespace e2e-tests-deployment-jk8cp deletion completed in 6.186858462s
+
+• [SLOW TEST:31.669 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  deployment should support rollover [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSS
+------------------------------
+[k8s.io] Variable Expansion 
+  should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:00:21.432: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename var-expansion
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test env composition
+Dec 20 09:00:21.643: INFO: Waiting up to 5m0s for pod "var-expansion-ae5628c6-0435-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-var-expansion-rx8tt" to be "success or failure"
+Dec 20 09:00:21.660: INFO: Pod "var-expansion-ae5628c6-0435-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 17.269364ms
+Dec 20 09:00:23.674: INFO: Pod "var-expansion-ae5628c6-0435-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031035788s
+Dec 20 09:00:25.696: INFO: Pod "var-expansion-ae5628c6-0435-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053440008s
+STEP: Saw pod success
+Dec 20 09:00:25.696: INFO: Pod "var-expansion-ae5628c6-0435-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 09:00:25.729: INFO: Trying to get logs from node 10-6-155-34 pod var-expansion-ae5628c6-0435-11e9-b141-0a58ac1c1472 container dapi-container: 
+STEP: delete the pod
+Dec 20 09:00:25.764: INFO: Waiting for pod var-expansion-ae5628c6-0435-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 09:00:25.772: INFO: Pod var-expansion-ae5628c6-0435-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:00:25.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-var-expansion-rx8tt" for this suite.
+Dec 20 09:00:31.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:00:31.845: INFO: namespace: e2e-tests-var-expansion-rx8tt, resource: bindings, ignored listing per whitelist
+Dec 20 09:00:32.008: INFO: namespace e2e-tests-var-expansion-rx8tt deletion completed in 6.226975344s
+
+• [SLOW TEST:10.576 seconds]
+[k8s.io] Variable Expansion
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-apps] Deployment 
+  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:00:32.008: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
+[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Dec 20 09:00:32.282: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
+Dec 20 09:00:32.293: INFO: Pod name sample-pod: Found 0 pods out of 1
+Dec 20 09:00:37.299: INFO: Pod name sample-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+Dec 20 09:00:37.299: INFO: Creating deployment "test-rolling-update-deployment"
+Dec 20 09:00:37.307: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
+Dec 20 09:00:37.314: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
+Dec 20 09:00:39.322: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
+Dec 20 09:00:39.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63680893237, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63680893237, loc:(*time.Location)(0x7b33b80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63680893237, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63680893237, loc:(*time.Location)(0x7b33b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-68b55d7bc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Dec 20 09:00:41.346: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
+Dec 20 09:00:41.366: INFO: Deployment "test-rolling-update-deployment":
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-fbsvq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fbsvq/deployments/test-rolling-update-deployment,UID:b7ae8778-0435-11e9-b07b-0242ac120004,ResourceVersion:969786,Generation:1,CreationTimestamp:2018-12-20 09:00:37 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2018-12-20 09:00:37 +0000 UTC 2018-12-20 09:00:37 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2018-12-20 09:00:41 +0000 UTC 2018-12-20 09:00:37 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-68b55d7bc6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
+
+docker-pullable://gcr.io/kubernetes-e2e-test-images/redis-amd64@sha256:2238f5a02d2648d41cc94a88f084060fbfa860890220328eb92696bf2ac649c9 docker://0656b1b4fdedfa3620e95c806c24220fa8745badcaf0457935449a2240227c20}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:00:41.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-deployment-fbsvq" for this suite.
+Dec 20 09:00:47.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:00:47.571: INFO: namespace: e2e-tests-deployment-fbsvq, resource: bindings, ignored listing per whitelist
+Dec 20 09:00:47.599: INFO: namespace e2e-tests-deployment-fbsvq deletion completed in 6.193908009s
+
+• [SLOW TEST:15.591 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
+  should create an rc from an image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:00:47.600: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[BeforeEach] [k8s.io] Kubectl run rc
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
+[It] should create an rc from an image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Dec 20 09:00:47.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-pngrs'
+Dec 20 09:00:48.077: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Dec 20 09:00:48.077: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
+STEP: verifying the rc e2e-test-nginx-rc was created
+STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
+STEP: confirm that you can get logs from an rc
+Dec 20 09:00:48.097: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-jrrj7]
+Dec 20 09:00:48.097: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-jrrj7" in namespace "e2e-tests-kubectl-pngrs" to be "running and ready"
+Dec 20 09:00:48.108: INFO: Pod "e2e-test-nginx-rc-jrrj7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.267346ms
+Dec 20 09:00:50.115: INFO: Pod "e2e-test-nginx-rc-jrrj7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017182163s
+Dec 20 09:00:52.119: INFO: Pod "e2e-test-nginx-rc-jrrj7": Phase="Running", Reason="", readiness=true. Elapsed: 4.021490115s
+Dec 20 09:00:52.119: INFO: Pod "e2e-test-nginx-rc-jrrj7" satisfied condition "running and ready"
+Dec 20 09:00:52.119: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-jrrj7]
+Dec 20 09:00:52.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pngrs'
+Dec 20 09:00:52.404: INFO: stderr: ""
+Dec 20 09:00:52.405: INFO: stdout: ""
+[AfterEach] [k8s.io] Kubectl run rc
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
+Dec 20 09:00:52.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pngrs'
+Dec 20 09:00:52.689: INFO: stderr: ""
+Dec 20 09:00:52.689: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:00:52.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-pngrs" for this suite.
+Dec 20 09:01:14.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:01:14.785: INFO: namespace: e2e-tests-kubectl-pngrs, resource: bindings, ignored listing per whitelist
+Dec 20 09:01:14.910: INFO: namespace e2e-tests-kubectl-pngrs deletion completed in 22.211594312s
+
+• [SLOW TEST:27.310 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl run rc
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should create an rc from an image  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1 
+  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] version v1
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:01:14.910: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Dec 20 09:01:15.102: INFO: (0) /api/v1/nodes/10-6-155-33:10250/proxy/logs/: 
+anaconda/
+audit/
+boot.log
+[AfterEach] version v1
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:01:15.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-proxy-hxdtd" for this suite.
+Dec 20 09:01:21.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:01:21.505: INFO: namespace: e2e-tests-proxy-hxdtd, resource: bindings, ignored listing per whitelist
+Dec 20 09:01:21.719: INFO: namespace e2e-tests-proxy-hxdtd deletion completed in 6.453780474s
+
+• [SLOW TEST:6.809 seconds]
+[sig-network] Proxy
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  version v1
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
+    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl describe 
+  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:01:21.719: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Dec 20 09:01:21.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 version --client'
+Dec 20 09:01:22.061: INFO: stderr: ""
+Dec 20 09:01:22.061: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.0\", GitCommit:\"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d\", GitTreeState:\"clean\", BuildDate:\"2018-12-03T21:04:45Z\", GoVersion:\"go1.11.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
+Dec 20 09:01:22.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-9j7cv'
+Dec 20 09:01:22.412: INFO: stderr: ""
+Dec 20 09:01:22.412: INFO: stdout: "replicationcontroller/redis-master created\n"
+Dec 20 09:01:22.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 create -f - --namespace=e2e-tests-kubectl-9j7cv'
+Dec 20 09:01:22.656: INFO: stderr: ""
+Dec 20 09:01:22.656: INFO: stdout: "service/redis-master created\n"
+STEP: Waiting for Redis master to start.
+Dec 20 09:01:23.667: INFO: Selector matched 1 pods for map[app:redis]
+Dec 20 09:01:23.667: INFO: Found 0 / 1
+Dec 20 09:01:24.670: INFO: Selector matched 1 pods for map[app:redis]
+Dec 20 09:01:24.670: INFO: Found 0 / 1
+Dec 20 09:01:25.667: INFO: Selector matched 1 pods for map[app:redis]
+Dec 20 09:01:25.667: INFO: Found 1 / 1
+Dec 20 09:01:25.667: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+Dec 20 09:01:25.686: INFO: Selector matched 1 pods for map[app:redis]
+Dec 20 09:01:25.686: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+Dec 20 09:01:25.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 describe pod redis-master-mvjmf --namespace=e2e-tests-kubectl-9j7cv'
+Dec 20 09:01:25.967: INFO: stderr: ""
+Dec 20 09:01:25.967: INFO: stdout: "Name:           redis-master-mvjmf\nNamespace:      e2e-tests-kubectl-9j7cv\nNode:           10-6-155-34/10.6.155.34\nStart Time:     Thu, 20 Dec 2018 09:01:22 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             172.28.20.93\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://fb47bfa791ab441e1266c4aa60707d73b944f903c0f2792271c996240cc4061f\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis-amd64@sha256:2238f5a02d2648d41cc94a88f084060fbfa860890220328eb92696bf2ac649c9\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 20 Dec 2018 09:01:25 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tvsrb (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-tvsrb:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-tvsrb\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  3s    default-scheduler     Successfully assigned e2e-tests-kubectl-9j7cv/redis-master-mvjmf to 10-6-155-34\n  Normal  Pulled     1s    kubelet, 10-6-155-34  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, 10-6-155-34  Created container\n  Normal  Started    0s    kubelet, 10-6-155-34  Started container\n"
+Dec 20 09:01:25.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 describe rc redis-master --namespace=e2e-tests-kubectl-9j7cv'
+Dec 20 09:01:26.236: INFO: stderr: ""
+Dec 20 09:01:26.236: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-9j7cv\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: redis-master-mvjmf\n"
+Dec 20 09:01:26.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 describe service redis-master --namespace=e2e-tests-kubectl-9j7cv'
+Dec 20 09:01:26.442: INFO: stderr: ""
+Dec 20 09:01:26.442: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-9j7cv\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.110.173.224\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         172.28.20.93:6379\nSession Affinity:  None\nEvents:            \n"
+Dec 20 09:01:26.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 describe node 10-6-155-33'
+Dec 20 09:01:26.675: INFO: stderr: ""
+Dec 20 09:01:26.675: INFO: stdout: "Name:               10-6-155-33\nRoles:              master,registry\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=10-6-155-33\n                    node-role.kubernetes.io/master=\n                    node-role.kubernetes.io/registry=\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 13 Dec 2018 02:57:38 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                  Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                  ------  -----------------                 ------------------                ------                       -------\n  DCEEngineNotReady     False   Thu, 20 Dec 2018 09:00:34 +0000   Mon, 17 Dec 2018 07:58:45 +0000   DCEEngineReady               DCE engine is posting ready status.\n  TimeNotSynchronized   False   Thu, 20 Dec 2018 09:00:34 +0000   Mon, 17 Dec 2018 07:58:45 +0000   TimeSynchronized             The time of the node is synchronized\n  OutOfDisk             False   Thu, 20 Dec 2018 07:14:09 +0000   Thu, 13 Dec 2018 02:57:30 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available\n  MemoryPressure        False   Thu, 20 Dec 2018 09:01:20 +0000   Thu, 13 Dec 2018 02:57:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure          False   Thu, 20 Dec 2018 09:01:20 +0000   Thu, 13 Dec 2018 02:57:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure           False   Thu, 20 Dec 2018 09:01:20 +0000   Thu, 13 Dec 2018 02:57:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                 True    Thu, 20 Dec 2018 09:01:20 +0000   Thu, 20 Dec 2018 07:14:13 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  10.6.155.33\n  Hostname:    10-6-155-33\nCapacity:\n cpu:                8\n ephemeral-storage:  36805060Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             16267516Ki\n pods:               110\nAllocatable:\n cpu:                5340m\n ephemeral-storage:  33919543240\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             10498300Ki\n pods:               110\nSystem Info:\n Machine ID:                 4df6e26c545742d48d061240c1d184ab\n System UUID:                42348703-6916-C154-36F2-93BD58139E49\n Boot ID:                    95c25c28-6d8f-483d-95e0-be1789cf213e\n Kernel Version:             3.10.0-693.el7.x86_64\n OS Image:                   CentOS Linux 7 (Core)\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://17.3.2\n Kubelet Version:            v1.13.1\n Kube-Proxy Version:         v1.13.1\nPodCIDR:                     172.28.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits    AGE\n  ---------                  ----                                          ------------  ----------  ---------------  -------------    ---\n  default                    d2048-2048-7b95b48c9b-n6hqw                   50m (0%)      50m (0%)    50M (0%)         50M (0%)         102m\n  default                    wordpress-wordpress-97f5cbb67-6j958           500m (9%)     500m (9%)   1073741824 (9%)  1073741824 (9%)  3d5h\n  default                    wordpress-wordpress-mysql-75d5f8f644-tbzfh    500m (9%)     500m (9%)   1073741824 (9%)  1073741824 (9%)  7d5h\n  kube-system                calico-kube-controllers-5dd6c6f8bc-4xfk4      412m (7%)     412m (7%)   845Mi (8%)       845Mi (8%)       3d5h\n  kube-system                calico-node-lbxlp                             250m (4%)     250m (4%)   500Mi (4%)       500Mi (4%)       106m\n  kube-system                coredns-87987d698-4brj5                       250m (4%)     250m (4%)   500Mi (4%)       500Mi (4%)       3d5h\n  kube-system                coredns-87987d698-55xbs                       250m (4%)     250m (4%)   500Mi (4%)       500Mi (4%)       7d5h\n  kube-system                kube-proxy-84x26                              250m (4%)     250m (4%)   500Mi (4%)       500Mi (4%)       105m\n  kube-system                smokeping-sb4jz                               125m (2%)     125m (2%)   250Mi (2%)       250Mi (2%)       7d5h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests          Limits\n  --------           --------          ------\n  cpu                2587m (48%)       2587m (48%)\n  memory             5442826368 (50%)  5442826368 (50%)\n  ephemeral-storage  0 (0%)            0 (0%)\nEvents:\n  Type    Reason                   Age   From                     Message\n  ----    ------                   ----  ----                     -------\n  Normal  Starting                 24h   kube-proxy, 10-6-155-33  Starting kube-proxy.\n  Normal  Starting                 107m  kubelet, 10-6-155-33     Starting kubelet.\n  Normal  NodeHasSufficientMemory  107m  kubelet, 10-6-155-33     Node 10-6-155-33 status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    107m  kubelet, 10-6-155-33     Node 10-6-155-33 status is now: NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     107m  kubelet, 10-6-155-33     Node 10-6-155-33 status is now: NodeHasSufficientPID\n  Normal  NodeNotReady             107m  kubelet, 10-6-155-33     Node 10-6-155-33 status is now: NodeNotReady\n  Normal  NodeAllocatableEnforced  107m  kubelet, 10-6-155-33     Updated Node Allocatable limit across pods\n  Normal  NodeReady                107m  kubelet, 10-6-155-33     Node 10-6-155-33 status is now: NodeReady\n  Normal  Starting                 105m  kube-proxy, 10-6-155-33  Starting kube-proxy.\n"
+Dec 20 09:01:26.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 describe namespace e2e-tests-kubectl-9j7cv'
+Dec 20 09:01:26.940: INFO: stderr: ""
+Dec 20 09:01:26.940: INFO: stdout: "Name:         e2e-tests-kubectl-9j7cv\nLabels:       e2e-framework=kubectl\n              e2e-run=ed5ee1a0-0427-11e9-b141-0a58ac1c1472\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:01:26.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-9j7cv" for this suite.
+Dec 20 09:01:48.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:01:49.064: INFO: namespace: e2e-tests-kubectl-9j7cv, resource: bindings, ignored listing per whitelist
+Dec 20 09:01:49.096: INFO: namespace e2e-tests-kubectl-9j7cv deletion completed in 22.14746256s
+
+• [SLOW TEST:27.377 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl describe
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for node-pod communication: http [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:01:49.096: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for node-pod communication: http [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-lx7q7
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Dec 20 09:01:49.219: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Dec 20 09:02:15.380: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.28.20.121:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-lx7q7 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Dec 20 09:02:15.380: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+Dec 20 09:02:15.948: INFO: Found all expected endpoints: [netserver-0]
+Dec 20 09:02:15.954: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.28.240.105:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-lx7q7 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Dec 20 09:02:15.954: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+Dec 20 09:02:16.248: INFO: Found all expected endpoints: [netserver-1]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:02:16.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pod-network-test-lx7q7" for this suite.
+Dec 20 09:02:38.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:02:38.430: INFO: namespace: e2e-tests-pod-network-test-lx7q7, resource: bindings, ignored listing per whitelist
+Dec 20 09:02:38.449: INFO: namespace e2e-tests-pod-network-test-lx7q7 deletion completed in 22.189928929s
+
+• [SLOW TEST:49.353 seconds]
+[sig-network] Networking
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for node-pod communication: http [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:02:38.449: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name projected-secret-test-fffbf59d-0435-11e9-b141-0a58ac1c1472
+STEP: Creating a pod to test consume secrets
+Dec 20 09:02:38.632: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fffe394d-0435-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-h72bv" to be "success or failure"
+Dec 20 09:02:38.638: INFO: Pod "pod-projected-secrets-fffe394d-0435-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 6.498438ms
+Dec 20 09:02:40.643: INFO: Pod "pod-projected-secrets-fffe394d-0435-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011338921s
+Dec 20 09:02:42.649: INFO: Pod "pod-projected-secrets-fffe394d-0435-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017530744s
+STEP: Saw pod success
+Dec 20 09:02:42.649: INFO: Pod "pod-projected-secrets-fffe394d-0435-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 09:02:42.657: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-secrets-fffe394d-0435-11e9-b141-0a58ac1c1472 container secret-volume-test: 
+STEP: delete the pod
+Dec 20 09:02:42.714: INFO: Waiting for pod pod-projected-secrets-fffe394d-0435-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 09:02:42.722: INFO: Pod pod-projected-secrets-fffe394d-0435-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:02:42.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-h72bv" for this suite.
+Dec 20 09:02:48.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:02:48.967: INFO: namespace: e2e-tests-projected-h72bv, resource: bindings, ignored listing per whitelist
+Dec 20 09:02:49.054: INFO: namespace e2e-tests-projected-h72bv deletion completed in 6.317248536s
+
+• [SLOW TEST:10.605 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSS
+------------------------------
+[sig-storage] Secrets 
+  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:02:49.054: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-0649a279-0436-11e9-b141-0a58ac1c1472
+STEP: Creating a pod to test consume secrets
+Dec 20 09:02:49.237: INFO: Waiting up to 5m0s for pod "pod-secrets-065204a7-0436-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-secrets-2m6c8" to be "success or failure"
+Dec 20 09:02:51.266: INFO: Pod "pod-secrets-065204a7-0436-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029041943s
+Dec 20 09:02:53.279: INFO: Pod "pod-secrets-065204a7-0436-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042205937s
+STEP: Saw pod success
+Dec 20 09:02:53.279: INFO: Pod "pod-secrets-065204a7-0436-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 09:02:53.283: INFO: Trying to get logs from node 10-6-155-34 pod pod-secrets-065204a7-0436-11e9-b141-0a58ac1c1472 container secret-volume-test: 
+STEP: delete the pod
+Dec 20 09:02:53.384: INFO: Waiting for pod pod-secrets-065204a7-0436-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 09:02:53.402: INFO: Pod pod-secrets-065204a7-0436-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:02:53.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-2m6c8" for this suite.
+Dec 20 09:02:59.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:02:59.626: INFO: namespace: e2e-tests-secrets-2m6c8, resource: bindings, ignored listing per whitelist
+Dec 20 09:02:59.642: INFO: namespace e2e-tests-secrets-2m6c8 deletion completed in 6.221787289s
+STEP: Destroying namespace "e2e-tests-secret-namespace-4jbv4" for this suite.
+Dec 20 09:03:05.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:03:05.852: INFO: namespace: e2e-tests-secret-namespace-4jbv4, resource: bindings, ignored listing per whitelist
+Dec 20 09:03:05.945: INFO: namespace e2e-tests-secret-namespace-4jbv4 deletion completed in 6.303305796s
+
+• [SLOW TEST:16.891 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
+  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Proxy server 
+  should support proxy with --port 0  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:03:05.945: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[It] should support proxy with --port 0  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: starting the proxy server
+Dec 20 09:03:06.168: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-647384748 proxy -p 0 --disable-filter'
+STEP: curling proxy /api/ output
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:03:06.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-zlwkl" for this suite.
+Dec 20 09:03:12.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:03:12.486: INFO: namespace: e2e-tests-kubectl-zlwkl, resource: bindings, ignored listing per whitelist
+Dec 20 09:03:12.615: INFO: namespace e2e-tests-kubectl-zlwkl deletion completed in 6.274922639s
+
+• [SLOW TEST:6.670 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Proxy server
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should support proxy with --port 0  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:03:12.616: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating projection with secret that has name projected-secret-test-146f2ccf-0436-11e9-b141-0a58ac1c1472
+STEP: Creating a pod to test consume secrets
+Dec 20 09:03:12.920: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-146fe7a1-0436-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-tssrz" to be "success or failure"
+Dec 20 09:03:12.924: INFO: Pod "pod-projected-secrets-146fe7a1-0436-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342112ms
+STEP: Saw pod success
+Dec 20 09:03:16.963: INFO: Pod "pod-projected-secrets-146fe7a1-0436-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 09:03:16.969: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-secrets-146fe7a1-0436-11e9-b141-0a58ac1c1472 container projected-secret-volume-test: 
+STEP: delete the pod
+Dec 20 09:03:17.008: INFO: Waiting for pod pod-projected-secrets-146fe7a1-0436-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 09:03:17.016: INFO: Pod pod-projected-secrets-146fe7a1-0436-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:03:17.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-tssrz" for this suite.
+Dec 20 09:03:23.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:03:23.125: INFO: namespace: e2e-tests-projected-tssrz, resource: bindings, ignored listing per whitelist
+Dec 20 09:03:23.344: INFO: namespace e2e-tests-projected-tssrz deletion completed in 6.306932964s
+
+• [SLOW TEST:10.728 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:03:23.344: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name projected-configmap-test-volume-map-1ac984ef-0436-11e9-b141-0a58ac1c1472
+STEP: Creating a pod to test consume configMaps
+Dec 20 09:03:23.597: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1acbf786-0436-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-projected-xln8b" to be "success or failure"
+Dec 20 09:03:23.608: INFO: Pod "pod-projected-configmaps-1acbf786-0436-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 10.279266ms
+Dec 20 09:03:29.649: INFO: Pod "pod-projected-configmaps-1acbf786-0436-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051368893s
+STEP: Saw pod success
+Dec 20 09:03:29.649: INFO: Pod "pod-projected-configmaps-1acbf786-0436-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 09:03:29.654: INFO: Trying to get logs from node 10-6-155-34 pod pod-projected-configmaps-1acbf786-0436-11e9-b141-0a58ac1c1472 container projected-configmap-volume-test: 
+STEP: delete the pod
+Dec 20 09:03:29.699: INFO: Waiting for pod pod-projected-configmaps-1acbf786-0436-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 09:03:29.712: INFO: Pod pod-projected-configmaps-1acbf786-0436-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:03:29.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-xln8b" for this suite.
+Dec 20 09:03:35.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:03:35.794: INFO: namespace: e2e-tests-projected-xln8b, resource: bindings, ignored listing per whitelist
+Dec 20 09:03:35.968: INFO: namespace e2e-tests-projected-xln8b deletion completed in 6.246380272s
+
+• [SLOW TEST:12.625 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
+  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0666,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:03:35.969: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir 0666 on node default medium
+Dec 20 09:03:36.131: INFO: Waiting up to 5m0s for pod "pod-22457882-0436-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-emptydir-f7kpw" to be "success or failure"
+Dec 20 09:03:40.151: INFO: Pod "pod-22457882-0436-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020176231s
+STEP: Saw pod success
+Dec 20 09:03:40.151: INFO: Pod "pod-22457882-0436-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 09:03:40.155: INFO: Trying to get logs from node 10-6-155-34 pod pod-22457882-0436-11e9-b141-0a58ac1c1472 container test-container: 
+STEP: delete the pod
+Dec 20 09:03:40.188: INFO: Waiting for pod pod-22457882-0436-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 09:03:40.193: INFO: Pod pod-22457882-0436-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:03:40.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-f7kpw" for this suite.
+Dec 20 09:03:46.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:03:46.284: INFO: namespace: e2e-tests-emptydir-f7kpw, resource: bindings, ignored listing per whitelist
+Dec 20 09:03:46.508: INFO: namespace e2e-tests-emptydir-f7kpw deletion completed in 6.308200658s
+
+• [SLOW TEST:10.539 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  should support (non-root,0666,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command in a pod 
+  should print the output to logs [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:03:46.508: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[It] should print the output to logs [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:03:50.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubelet-test-7jhb7" for this suite.
+Dec 20 09:04:42.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:04:42.919: INFO: namespace: e2e-tests-kubelet-test-7jhb7, resource: bindings, ignored listing per whitelist
+Dec 20 09:04:42.984: INFO: namespace e2e-tests-kubelet-test-7jhb7 deletion completed in 52.262196016s
+
+• [SLOW TEST:56.476 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  when scheduling a busybox command in a pod
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
+    should print the output to logs [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
+  should check if Kubernetes master services is included in cluster-info  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:04:42.984: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: validating cluster-info
+Dec 20 09:04:43.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-647384748 cluster-info'
+Dec 20 09:04:43.534: INFO: stderr: ""
+Dec 20 09:04:43.535: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n\x1b[0;32mcoredns\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443/api/v1/namespaces/kube-system/services/coredns-coredns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:04:43.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-ghg7s" for this suite.
+Dec 20 09:04:49.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:04:49.820: INFO: namespace: e2e-tests-kubectl-ghg7s, resource: bindings, ignored listing per whitelist
+Dec 20 09:04:49.826: INFO: namespace e2e-tests-kubectl-ghg7s deletion completed in 6.275090736s
+
+• [SLOW TEST:6.841 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl cluster-info
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should check if Kubernetes master services is included in cluster-info  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:04:49.826: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename init-container
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
+[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating the pod
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:05:38.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-init-container-rcnh6" for this suite.
+Dec 20 09:06:00.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:06:00.585: INFO: namespace: e2e-tests-init-container-rcnh6, resource: bindings, ignored listing per whitelist
+Dec 20 09:06:00.603: INFO: namespace e2e-tests-init-container-rcnh6 deletion completed in 22.256733079s
+
+• [SLOW TEST:70.777 seconds]
+[k8s.io] InitContainer [NodeConformance]
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Dec 20 09:06:00.604: INFO: >>> kubeConfig: /tmp/kubeconfig-647384748
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-volume-787d78c6-0436-11e9-b141-0a58ac1c1472
+STEP: Creating a pod to test consume configMaps
+Dec 20 09:06:00.806: INFO: Waiting up to 5m0s for pod "pod-configmaps-787f97eb-0436-11e9-b141-0a58ac1c1472" in namespace "e2e-tests-configmap-96s5s" to be "success or failure"
+Dec 20 09:06:00.814: INFO: Pod "pod-configmaps-787f97eb-0436-11e9-b141-0a58ac1c1472": Phase="Pending", Reason="", readiness=false. Elapsed: 8.603483ms
+Dec 20 09:06:06.844: INFO: Pod "pod-configmaps-787f97eb-0436-11e9-b141-0a58ac1c1472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037985098s
+STEP: Saw pod success
+Dec 20 09:06:06.844: INFO: Pod "pod-configmaps-787f97eb-0436-11e9-b141-0a58ac1c1472" satisfied condition "success or failure"
+Dec 20 09:06:06.851: INFO: Trying to get logs from node 10-6-155-34 pod pod-configmaps-787f97eb-0436-11e9-b141-0a58ac1c1472 container configmap-volume-test: 
+STEP: delete the pod
+Dec 20 09:06:06.889: INFO: Waiting for pod pod-configmaps-787f97eb-0436-11e9-b141-0a58ac1c1472 to disappear
+Dec 20 09:06:06.893: INFO: Pod pod-configmaps-787f97eb-0436-11e9-b141-0a58ac1c1472 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Dec 20 09:06:06.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-96s5s" for this suite.
+Dec 20 09:06:12.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Dec 20 09:06:13.003: INFO: namespace: e2e-tests-configmap-96s5s, resource: bindings, ignored listing per whitelist
+Dec 20 09:06:13.083: INFO: namespace e2e-tests-configmap-96s5s deletion completed in 6.178045817s
+
+• [SLOW TEST:12.480 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSDec 20 09:06:13.084: INFO: Running AfterSuite actions on all nodes
+Dec 20 09:06:13.136: INFO: Running AfterSuite actions on node 1
+Dec 20 09:06:13.136: INFO: Skipping dumping logs from cluster
+
+Ran 200 of 1946 Specs in 6256.041 seconds
+SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1746 Skipped PASS
+
+Ginkgo ran 1 suite in 1h44m21.430750402s
+Test Suite Passed
diff --git a/v1.13/dce/junit_01.xml b/v1.13/dce/junit_01.xml
new file mode 100644
index 0000000000..4aa3717e49
--- /dev/null
+++ b/v1.13/dce/junit_01.xml
@@ -0,0 +1,5441 @@
+[junit_01.xml: JUnit XML test results (5,441 lines added) — XML markup was stripped during extraction and the element content is unrecoverable; body omitted]
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+  
\ No newline at end of file
diff --git a/v1.13/dce/version.txt b/v1.13/dce/version.txt
new file mode 100644
index 0000000000..6baebb3d2f
--- /dev/null
+++ b/v1.13/dce/version.txt
@@ -0,0 +1,2 @@
+Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
+Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
\ No newline at end of file