This job view page is being replaced by Spyglass soon. Check out the new job view.
ResultFAILURE
Tests 0 failed / 0 succeeded
Started2020-03-30 20:52
Elapsed2h0m
Revisionmaster
links{u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/73fc24a8-c111-4e64-9ff6-933b8a391a4b/targets/test'}}
resultstorehttps://source.cloud.google.com/results/invocations/73fc24a8-c111-4e64-9ff6-933b8a391a4b/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 191 lines ...
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: eeae8295-9801-4389-b891-a3322451815f
Loading: 
Loading: 0 packages loaded
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_go/releases/download/v0.22.2/rules_go-v0.22.2.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/kubernetes/repo-infra/archive/v0.0.3.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading: 0 packages loaded
Loading: 0 packages loaded
    currently loading: test/e2e ... (3 packages)
Analyzing: 3 targets (3 packages loaded, 0 targets configured)
Analyzing: 3 targets (16 packages loaded, 9 targets configured)
Analyzing: 3 targets (16 packages loaded, 9 targets configured)
... skipping 1796 lines ...
    ubuntu-1804:
    ubuntu-1804: TASK [sysprep : Truncate shell history] ****************************************
    ubuntu-1804: ok: [default] => (item={u'path': u'/root/.bash_history'})
    ubuntu-1804: ok: [default] => (item={u'path': u'/home/ubuntu/.bash_history'})
    ubuntu-1804:
    ubuntu-1804: PLAY RECAP *********************************************************************
    ubuntu-1804: default                    : ok=60   changed=46   unreachable=0    failed=0    skipped=72   rescued=0    ignored=0
    ubuntu-1804:
==> ubuntu-1804: Deleting instance...
    ubuntu-1804: Instance has been deleted!
==> ubuntu-1804: Creating image...
==> ubuntu-1804: Deleting disk...
    ubuntu-1804: Disk has been deleted!
... skipping 240 lines ...
# Wait for the kubeconfig to become available.
timeout 300 bash -c "while ! kubectl get secrets | grep test1-kubeconfig; do sleep 1; done"
test1-kubeconfig            Opaque                                1      1s
# Get kubeconfig and store it locally.
kubectl get secrets test1-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep master; do sleep 1; done"
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
error: the server doesn't have a resource type "nodes"
No resources found in default namespace.
No resources found in default namespace.
No resources found in default namespace.
No resources found in default namespace.
No resources found in default namespace.
No resources found in default namespace.
... skipping 60 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:19:33]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:19:43]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
++ jq -r '.items[].status.phase'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:19:54]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
++ jq -r '.items[].status.phase'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:20:04]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:20:14]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:20:24]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ jq -r '.items[].status.phase'
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:20:35]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 0 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:20:45]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:20:55]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:21:06]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
++ jq -r '.items[].status.phase'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:21:16]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:21:26]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:21:36]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:21:47]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:21:57]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:22:07]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:22:17]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:22:28]'
... skipping 11 lines ...
+ read running total
++ jq -r '.items[].status.phase'
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:22:38]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ jq -r '.items[].status.phase'
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:22:48]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ jq -r '.items[].status.phase'
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:22:58]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:23:09]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 1 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:23:19]'
... skipping 11 lines ...
+ read running total
++ kubectl get machines --context=kind-clusterapi -o json
++ jq -r '.items[].status.phase'
++ awk 'BEGIN{count=0} /(r|R)unning/{count++} END{print count " " NR}'
+ [[ 3 == \3 ]]
+ [[ 2 == \3 ]]
+ read failed total
++ kubectl get machines --context=kind-clusterapi -o json
++ awk 'BEGIN{count=0} /(f|F)ailed/{count++} END{print count " " NR}'
++ jq -r '.items[].status.phase'
+ [[ ! 0 -eq 0 ]]
++ date '+[%H:%M:%S]'
+ timestamp='[21:23:29]'
... skipping 34 lines ...
++ go env GOPATH
+ cd /home/prow/go/src/k8s.io/kubernetes
+ ./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 '--ginkgo.focus=\[Conformance\]' --ginkgo.skip= --report-dir=/logs/artifacts --disable-log-dump=true
Conformance test: not doing test setup.
I0330 21:23:57.962746   26158 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0330 21:23:57.962892   26158 e2e.go:124] Starting e2e run "26702e33-9477-43de-a239-0bfecd00b78b" on Ginkgo node 1
{"msg":"Test Suite starting","total":283,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585603436 - Will randomize all specs
Will run 283 of 4993 specs

Mar 30 21:23:57.980: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 21:23:57.995: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 30 21:23:58.120: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 30 21:23:58.266: INFO: The status of Pod calico-node-hnlbd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 30 21:23:58.266: INFO: 12 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 30 21:23:58.266: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 30 21:23:58.266: INFO: POD                NODE                                               PHASE    GRACE  CONDITIONS
Mar 30 21:23:58.266: INFO: calico-node-hnlbd  test1-md-0-nfkzj.c.kubernetes-es-logging.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC  }]
Mar 30 21:23:58.266: INFO: 
Mar 30 21:24:00.422: INFO: The status of Pod calico-node-hnlbd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 30 21:24:00.422: INFO: 12 / 13 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Mar 30 21:24:00.422: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 30 21:24:00.422: INFO: POD                NODE                                               PHASE    GRACE  CONDITIONS
Mar 30 21:24:00.422: INFO: calico-node-hnlbd  test1-md-0-nfkzj.c.kubernetes-es-logging.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC  }]
Mar 30 21:24:00.422: INFO: 
Mar 30 21:24:02.412: INFO: The status of Pod calico-node-hnlbd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 30 21:24:02.412: INFO: 12 / 13 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Mar 30 21:24:02.412: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 30 21:24:02.412: INFO: POD                NODE                                               PHASE    GRACE  CONDITIONS
Mar 30 21:24:02.412: INFO: calico-node-hnlbd  test1-md-0-nfkzj.c.kubernetes-es-logging.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC  }]
Mar 30 21:24:02.412: INFO: 
Mar 30 21:24:04.411: INFO: The status of Pod calico-node-hnlbd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 30 21:24:04.411: INFO: 12 / 13 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
Mar 30 21:24:04.411: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 30 21:24:04.411: INFO: POD                NODE                                               PHASE    GRACE  CONDITIONS
Mar 30 21:24:04.411: INFO: calico-node-hnlbd  test1-md-0-nfkzj.c.kubernetes-es-logging.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC  }]
Mar 30 21:24:04.411: INFO: 
Mar 30 21:24:06.410: INFO: The status of Pod calico-node-hnlbd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 30 21:24:06.410: INFO: 12 / 13 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
Mar 30 21:24:06.410: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 30 21:24:06.410: INFO: POD                NODE                                               PHASE    GRACE  CONDITIONS
Mar 30 21:24:06.410: INFO: calico-node-hnlbd  test1-md-0-nfkzj.c.kubernetes-es-logging.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC  }]
Mar 30 21:24:06.410: INFO: 
Mar 30 21:24:08.411: INFO: The status of Pod calico-node-hnlbd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 30 21:24:08.411: INFO: 12 / 13 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
Mar 30 21:24:08.411: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 30 21:24:08.411: INFO: POD                NODE                                               PHASE    GRACE  CONDITIONS
Mar 30 21:24:08.411: INFO: calico-node-hnlbd  test1-md-0-nfkzj.c.kubernetes-es-logging.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:23:35 +0000 UTC  }]
Mar 30 21:24:08.411: INFO: 
Mar 30 21:24:10.418: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
... skipping 41 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:24:19.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6843" for this suite.
STEP: Destroying namespace "webhook-6843-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":283,"completed":1,"skipped":22,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-6c6853aa-3480-4e28-b91d-cee2da4e1deb
STEP: Creating a pod to test consume configMaps
Mar 30 21:24:19.679: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-57784bb2-b17d-43d7-aa35-e1ceff7c2d83" in namespace "projected-5445" to be "Succeeded or Failed"
Mar 30 21:24:19.709: INFO: Pod "pod-projected-configmaps-57784bb2-b17d-43d7-aa35-e1ceff7c2d83": Phase="Pending", Reason="", readiness=false. Elapsed: 30.373982ms
Mar 30 21:24:21.740: INFO: Pod "pod-projected-configmaps-57784bb2-b17d-43d7-aa35-e1ceff7c2d83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061219634s
STEP: Saw pod success
Mar 30 21:24:21.740: INFO: Pod "pod-projected-configmaps-57784bb2-b17d-43d7-aa35-e1ceff7c2d83" satisfied condition "Succeeded or Failed"
Mar 30 21:24:21.772: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-projected-configmaps-57784bb2-b17d-43d7-aa35-e1ceff7c2d83 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 21:24:21.866: INFO: Waiting for pod pod-projected-configmaps-57784bb2-b17d-43d7-aa35-e1ceff7c2d83 to disappear
Mar 30 21:24:21.896: INFO: Pod pod-projected-configmaps-57784bb2-b17d-43d7-aa35-e1ceff7c2d83 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 21:24:21.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5445" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":2,"skipped":72,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 29 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 21:24:33.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4360" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":283,"completed":3,"skipped":107,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Mar 30 21:24:43.630: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-183 /api/v1/namespaces/watch-183/configmaps/e2e-watch-test-label-changed 52e4a3c6-79ef-420e-bebf-6c06eb31e9af 1581 0 2020-03-30 21:24:33 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 30 21:24:43.630: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-183 /api/v1/namespaces/watch-183/configmaps/e2e-watch-test-label-changed 52e4a3c6-79ef-420e-bebf-6c06eb31e9af 1582 0 2020-03-30 21:24:33 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 30 21:24:43.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-183" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":283,"completed":4,"skipped":137,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 26 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 30 21:24:45.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9459" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":283,"completed":5,"skipped":145,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 21:24:55.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1120" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":283,"completed":6,"skipped":146,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-821bde24-1e3f-4fec-80f3-6a6ff31a27dc
STEP: Creating a pod to test consume secrets
Mar 30 21:24:56.145: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-275e8556-f03b-42b9-b6c2-dbb2efc517f1" in namespace "projected-2365" to be "Succeeded or Failed"
Mar 30 21:24:56.176: INFO: Pod "pod-projected-secrets-275e8556-f03b-42b9-b6c2-dbb2efc517f1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.543209ms
Mar 30 21:24:58.207: INFO: Pod "pod-projected-secrets-275e8556-f03b-42b9-b6c2-dbb2efc517f1": Phase="Running", Reason="", readiness=true. Elapsed: 2.061416781s
Mar 30 21:25:00.239: INFO: Pod "pod-projected-secrets-275e8556-f03b-42b9-b6c2-dbb2efc517f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094210465s
STEP: Saw pod success
Mar 30 21:25:00.239: INFO: Pod "pod-projected-secrets-275e8556-f03b-42b9-b6c2-dbb2efc517f1" satisfied condition "Succeeded or Failed"
Mar 30 21:25:00.269: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-projected-secrets-275e8556-f03b-42b9-b6c2-dbb2efc517f1 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 30 21:25:00.379: INFO: Waiting for pod pod-projected-secrets-275e8556-f03b-42b9-b6c2-dbb2efc517f1 to disappear
Mar 30 21:25:00.409: INFO: Pod pod-projected-secrets-275e8556-f03b-42b9-b6c2-dbb2efc517f1 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 21:25:00.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2365" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":7,"skipped":162,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:25:05.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5872" for this suite.
STEP: Destroying namespace "webhook-5872-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":283,"completed":8,"skipped":184,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 8 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 30 21:25:11.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9159" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":283,"completed":9,"skipped":204,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 30 21:25:16.780: INFO: stderr: ""
Mar 30 21:25:16.780: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-380-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 21:25:19.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3447" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":283,"completed":10,"skipped":219,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 33.023725ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 30 21:25:20.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3109" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":283,"completed":11,"skipped":267,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-92b88911-a7b1-4c39-a1cf-a3ba40c878d5
STEP: Creating a pod to test consume configMaps
Mar 30 21:25:20.666: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6c4c6509-679f-4347-bdfd-97e1b4bc88ca" in namespace "projected-1433" to be "Succeeded or Failed"
Mar 30 21:25:20.700: INFO: Pod "pod-projected-configmaps-6c4c6509-679f-4347-bdfd-97e1b4bc88ca": Phase="Pending", Reason="", readiness=false. Elapsed: 33.367489ms
Mar 30 21:25:22.730: INFO: Pod "pod-projected-configmaps-6c4c6509-679f-4347-bdfd-97e1b4bc88ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0632674s
STEP: Saw pod success
Mar 30 21:25:22.730: INFO: Pod "pod-projected-configmaps-6c4c6509-679f-4347-bdfd-97e1b4bc88ca" satisfied condition "Succeeded or Failed"
Mar 30 21:25:22.759: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-projected-configmaps-6c4c6509-679f-4347-bdfd-97e1b4bc88ca container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 21:25:22.849: INFO: Waiting for pod pod-projected-configmaps-6c4c6509-679f-4347-bdfd-97e1b4bc88ca to disappear
Mar 30 21:25:22.879: INFO: Pod pod-projected-configmaps-6c4c6509-679f-4347-bdfd-97e1b4bc88ca no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 21:25:22.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1433" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":12,"skipped":284,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 30 21:25:23.094: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 21:25:25.926: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 21:25:38.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7303" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":283,"completed":13,"skipped":301,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:25:43.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5429" for this suite.
STEP: Destroying namespace "webhook-5429-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":283,"completed":14,"skipped":303,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 30 21:25:44.425: INFO: stderr: ""
Mar 30 21:25:44.425: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://35.241.26.221:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://35.241.26.221:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 21:25:44.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6405" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":283,"completed":15,"skipped":330,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:26:01.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7202" for this suite.
STEP: Destroying namespace "webhook-7202-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":283,"completed":16,"skipped":358,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Mar 30 21:26:09.863: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 30 21:26:09.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7013" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":283,"completed":17,"skipped":387,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-c10a4ce1-ad41-429e-8159-f4aa7814b4ec
STEP: Creating a pod to test consume secrets
Mar 30 21:26:10.173: INFO: Waiting up to 5m0s for pod "pod-secrets-5e9c99a9-2fd0-41b3-807e-529cb7977112" in namespace "secrets-216" to be "Succeeded or Failed"
Mar 30 21:26:10.204: INFO: Pod "pod-secrets-5e9c99a9-2fd0-41b3-807e-529cb7977112": Phase="Pending", Reason="", readiness=false. Elapsed: 30.758161ms
Mar 30 21:26:12.234: INFO: Pod "pod-secrets-5e9c99a9-2fd0-41b3-807e-529cb7977112": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06064192s
STEP: Saw pod success
Mar 30 21:26:12.234: INFO: Pod "pod-secrets-5e9c99a9-2fd0-41b3-807e-529cb7977112" satisfied condition "Succeeded or Failed"
Mar 30 21:26:12.263: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-secrets-5e9c99a9-2fd0-41b3-807e-529cb7977112 container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 21:26:12.337: INFO: Waiting for pod pod-secrets-5e9c99a9-2fd0-41b3-807e-529cb7977112 to disappear
Mar 30 21:26:12.366: INFO: Pod pod-secrets-5e9c99a9-2fd0-41b3-807e-529cb7977112 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 21:26:12.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-216" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":18,"skipped":394,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 30 21:26:12.452: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Mar 30 21:26:12.614: INFO: Waiting up to 5m0s for pod "client-containers-9875fd16-346e-4499-8622-de995f671778" in namespace "containers-9207" to be "Succeeded or Failed"
Mar 30 21:26:12.643: INFO: Pod "client-containers-9875fd16-346e-4499-8622-de995f671778": Phase="Pending", Reason="", readiness=false. Elapsed: 29.458614ms
Mar 30 21:26:14.674: INFO: Pod "client-containers-9875fd16-346e-4499-8622-de995f671778": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059860401s
STEP: Saw pod success
Mar 30 21:26:14.674: INFO: Pod "client-containers-9875fd16-346e-4499-8622-de995f671778" satisfied condition "Succeeded or Failed"
Mar 30 21:26:14.704: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod client-containers-9875fd16-346e-4499-8622-de995f671778 container test-container: <nil>
STEP: delete the pod
Mar 30 21:26:14.780: INFO: Waiting for pod client-containers-9875fd16-346e-4499-8622-de995f671778 to disappear
Mar 30 21:26:14.809: INFO: Pod client-containers-9875fd16-346e-4499-8622-de995f671778 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 30 21:26:14.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9207" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":283,"completed":19,"skipped":414,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 21:26:15.060: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca89e74a-519d-47ad-b263-a707adf4577c" in namespace "projected-8211" to be "Succeeded or Failed"
Mar 30 21:26:15.097: INFO: Pod "downwardapi-volume-ca89e74a-519d-47ad-b263-a707adf4577c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.628854ms
Mar 30 21:26:17.126: INFO: Pod "downwardapi-volume-ca89e74a-519d-47ad-b263-a707adf4577c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065986411s
STEP: Saw pod success
Mar 30 21:26:17.126: INFO: Pod "downwardapi-volume-ca89e74a-519d-47ad-b263-a707adf4577c" satisfied condition "Succeeded or Failed"
Mar 30 21:26:17.155: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod downwardapi-volume-ca89e74a-519d-47ad-b263-a707adf4577c container client-container: <nil>
STEP: delete the pod
Mar 30 21:26:17.226: INFO: Waiting for pod downwardapi-volume-ca89e74a-519d-47ad-b263-a707adf4577c to disappear
Mar 30 21:26:17.255: INFO: Pod downwardapi-volume-ca89e74a-519d-47ad-b263-a707adf4577c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 21:26:17.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8211" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":20,"skipped":466,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Mar 30 21:27:08.315: INFO: Restart count of pod container-probe-9195/busybox-e10cdc8e-f2cc-4f61-bed0-f55733c3ec71 is now 1 (46.729480993s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 21:27:08.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9195" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":283,"completed":21,"skipped":502,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 38 lines ...
• [SLOW TEST:304.947 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":283,"completed":22,"skipped":517,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 21:32:13.398: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 30 21:32:13.556: INFO: Waiting up to 5m0s for pod "pod-b00bd217-4b90-41f8-9dfc-9ee656a82d75" in namespace "emptydir-7466" to be "Succeeded or Failed"
Mar 30 21:32:13.587: INFO: Pod "pod-b00bd217-4b90-41f8-9dfc-9ee656a82d75": Phase="Pending", Reason="", readiness=false. Elapsed: 30.779318ms
Mar 30 21:32:15.616: INFO: Pod "pod-b00bd217-4b90-41f8-9dfc-9ee656a82d75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059997066s
STEP: Saw pod success
Mar 30 21:32:15.616: INFO: Pod "pod-b00bd217-4b90-41f8-9dfc-9ee656a82d75" satisfied condition "Succeeded or Failed"
Mar 30 21:32:15.646: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-b00bd217-4b90-41f8-9dfc-9ee656a82d75 container test-container: <nil>
STEP: delete the pod
Mar 30 21:32:15.729: INFO: Waiting for pod pod-b00bd217-4b90-41f8-9dfc-9ee656a82d75 to disappear
Mar 30 21:32:15.758: INFO: Pod pod-b00bd217-4b90-41f8-9dfc-9ee656a82d75 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 21:32:15.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7466" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":23,"skipped":557,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 21:32:16.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ce13393-22c8-4f61-819e-feb8b454f780" in namespace "downward-api-8836" to be "Succeeded or Failed"
Mar 30 21:32:16.039: INFO: Pod "downwardapi-volume-3ce13393-22c8-4f61-819e-feb8b454f780": Phase="Pending", Reason="", readiness=false. Elapsed: 35.526666ms
Mar 30 21:32:18.068: INFO: Pod "downwardapi-volume-3ce13393-22c8-4f61-819e-feb8b454f780": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064880092s
STEP: Saw pod success
Mar 30 21:32:18.068: INFO: Pod "downwardapi-volume-3ce13393-22c8-4f61-819e-feb8b454f780" satisfied condition "Succeeded or Failed"
Mar 30 21:32:18.098: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downwardapi-volume-3ce13393-22c8-4f61-819e-feb8b454f780 container client-container: <nil>
STEP: delete the pod
Mar 30 21:32:18.169: INFO: Waiting for pod downwardapi-volume-3ce13393-22c8-4f61-819e-feb8b454f780 to disappear
Mar 30 21:32:18.199: INFO: Pod downwardapi-volume-3ce13393-22c8-4f61-819e-feb8b454f780 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 21:32:18.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8836" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":24,"skipped":557,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...

W0330 21:32:19.399566   26158 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 30 21:32:19.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6337" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":283,"completed":25,"skipped":571,"failed":0}
SS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 21:32:23.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7810" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":283,"completed":26,"skipped":573,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-298d27ba-8dde-4aee-8a47-bf95677e3f6a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 21:32:30.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3126" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":27,"skipped":601,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Mar 30 21:32:33.322: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 30 21:32:33.322: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe pod agnhost-master-sfhdv --namespace=kubectl-6961'
Mar 30 21:32:33.583: INFO: stderr: ""
Mar 30 21:32:33.583: INFO: stdout: "Name:         agnhost-master-sfhdv\nNamespace:    kubectl-6961\nPriority:     0\nNode:         test1-md-0-m7pwl.c.kubernetes-es-logging.internal/10.150.0.4\nStart Time:   Mon, 30 Mar 2020 21:32:30 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  cni.projectcalico.org/podIP: 192.168.32.18/32\n              cni.projectcalico.org/podIPs: 192.168.32.18/32\nStatus:       Running\nIP:           192.168.32.18\nIPs:\n  IP:           192.168.32.18\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://e4525b7e12d2eb27926aa5764c88337895f31e80f057065881cc4114004787ed\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 30 Mar 2020 21:32:31 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm9bb (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-rm9bb:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-rm9bb\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                                                        Message\n  ----    ------     ----  ----                                                        -------\n  Normal  Scheduled  3s    default-scheduler                                           Successfully assigned kubectl-6961/agnhost-master-sfhdv to test1-md-0-m7pwl.c.kubernetes-es-logging.internal\n  Normal  Pulled     2s    kubelet, test1-md-0-m7pwl.c.kubernetes-es-logging.internal  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s    kubelet, test1-md-0-m7pwl.c.kubernetes-es-logging.internal  Created container agnhost-master\n  Normal  Started    2s    kubelet, test1-md-0-m7pwl.c.kubernetes-es-logging.internal  Started container agnhost-master\n"
Mar 30 21:32:33.583: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe rc agnhost-master --namespace=kubectl-6961'
Mar 30 21:32:33.861: INFO: stderr: ""
Mar 30 21:32:33.861: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-6961\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-master-sfhdv\n"
Mar 30 21:32:33.861: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe service agnhost-master --namespace=kubectl-6961'
Mar 30 21:32:34.126: INFO: stderr: ""
Mar 30 21:32:34.126: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-6961\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.102.164.224\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.32.18:6379\nSession Affinity:  None\nEvents:            <none>\n"
Mar 30 21:32:34.181: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe node test1-control-plane-qvzgv.c.kubernetes-es-logging.internal'
Mar 30 21:32:34.545: INFO: stderr: ""
Mar 30 21:32:34.545: INFO: stdout: "Name:               test1-control-plane-qvzgv.c.kubernetes-es-logging.internal\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-2\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=us-east4\n                    failure-domain.beta.kubernetes.io/zone=us-east4-a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=test1-control-plane-qvzgv.c.kubernetes-es-logging.internal\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\n                    node.kubernetes.io/instance-type=n1-standard-2\n                    topology.kubernetes.io/region=us-east4\n                    topology.kubernetes.io/zone=us-east4-a\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    projectcalico.org/IPv4Address: 10.150.0.2/32\n                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.197.192\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 30 Mar 2020 21:19:28 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  test1-control-plane-qvzgv.c.kubernetes-es-logging.internal\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 30 Mar 2020 21:32:31 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Mon, 30 Mar 2020 21:20:02 +0000   Mon, 30 Mar 2020 21:20:02 +0000   CalicoIsUp                   Calico is running on this node\n  MemoryPressure       False   Mon, 30 Mar 2020 21:30:31 +0000   Mon, 30 Mar 2020 21:19:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Mon, 30 Mar 2020 21:30:31 +0000   Mon, 30 Mar 2020 21:19:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Mon, 30 Mar 2020 21:30:31 +0000   Mon, 30 Mar 2020 21:19:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Mon, 30 Mar 2020 21:30:31 +0000   Mon, 30 Mar 2020 21:20:01 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.150.0.2\n  ExternalIP:   \n  InternalDNS:  test1-control-plane-qvzgv.c.kubernetes-es-logging.internal\n  Hostname:     test1-control-plane-qvzgv.c.kubernetes-es-logging.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        2\n  ephemeral-storage:          30308240Ki\n  hugepages-1Gi:              0\n  hugepages-2Mi:              0\n  memory:                     7648892Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        2\n  ephemeral-storage:          27932073938\n  hugepages-1Gi:              0\n  hugepages-2Mi:              0\n  memory:                     7546492Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 eb2aa64f3217fd5ba809056d85734fa7\n  System UUID:                eb2aa64f-3217-fd5b-a809-056d85734fa7\n  Boot ID:                    b56a9e3a-4141-470a-a983-05161ae9051f\n  Kernel Version:             5.0.0-1033-gcp\n  OS Image:                   Ubuntu 18.04.4 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3\n  Kubelet Version:            v1.17.4\n  Kube-Proxy Version:         v1.17.4\nProviderID:                   gce://kubernetes-es-logging/us-east4-a/test1-control-plane-qvzgv\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                                                                  ------------  ----------  ---------------  -------------  ---\n  kube-system                 calico-kube-controllers-788d6b9876-762jd                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m\n  kube-system                 calico-node-t99h2                                                                     250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m\n  kube-system                 coredns-6955765f44-t4j9h                                                              100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m\n  kube-system                 coredns-6955765f44-w9cdd                                                              100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m\n  kube-system                 etcd-test1-control-plane-qvzgv.c.kubernetes-es-logging.internal                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m\n  kube-system                 kube-apiserver-test1-control-plane-qvzgv.c.kubernetes-es-logging.internal             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m\n  kube-system                 kube-controller-manager-test1-control-plane-qvzgv.c.kubernetes-es-logging.internal    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m\n  kube-system                 kube-proxy-4dbxh                                                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m\n  kube-system                 kube-scheduler-test1-control-plane-qvzgv.c.kubernetes-es-logging.internal             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests    Limits\n  --------                   --------    ------\n  cpu                        1 (50%)     0 (0%)\n  memory                     140Mi (1%)  340Mi (4%)\n  ephemeral-storage          0 (0%)      0 (0%)\n  hugepages-1Gi              0 (0%)      0 (0%)\n  hugepages-2Mi              0 (0%)      0 (0%)\n  attachable-volumes-gce-pd  0           0\nEvents:\n  Type     Reason                   Age   From                                                                    Message\n  ----     ------                   ----  ----                                                                    -------\n  Normal   Starting                 13m   kubelet, test1-control-plane-qvzgv.c.kubernetes-es-logging.internal     Starting kubelet.\n  Warning  InvalidDiskCapacity      13m   kubelet, test1-control-plane-qvzgv.c.kubernetes-es-logging.internal     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory  13m   kubelet, test1-control-plane-qvzgv.c.kubernetes-es-logging.internal     Node test1-control-plane-qvzgv.c.kubernetes-es-logging.internal status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    13m   kubelet, test1-control-plane-qvzgv.c.kubernetes-es-logging.internal     Node test1-control-plane-qvzgv.c.kubernetes-es-logging.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     13m   kubelet, test1-control-plane-qvzgv.c.kubernetes-es-logging.internal     Node test1-control-plane-qvzgv.c.kubernetes-es-logging.internal status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced  13m   kubelet, test1-control-plane-qvzgv.c.kubernetes-es-logging.internal     Updated Node Allocatable limit across pods\n  Normal   Starting                 12m   kube-proxy, test1-control-plane-qvzgv.c.kubernetes-es-logging.internal  Starting kube-proxy.\n  Normal   NodeReady                12m   kubelet, test1-control-plane-qvzgv.c.kubernetes-es-logging.internal     Node test1-control-plane-qvzgv.c.kubernetes-es-logging.internal status is now: NodeReady\n"
Mar 30 21:32:34.546: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe namespace kubectl-6961'
Mar 30 21:32:34.828: INFO: stderr: ""
Mar 30 21:32:34.828: INFO: stdout: "Name:         kubectl-6961\nLabels:       e2e-framework=kubectl\n              e2e-run=26702e33-9477-43de-a239-0bfecd00b78b\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 21:32:34.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6961" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":283,"completed":28,"skipped":618,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 132 lines ...
Mar 30 21:33:00.680: INFO: stderr: ""
Mar 30 21:33:00.680: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 21:33:00.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9740" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":283,"completed":29,"skipped":638,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 61 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 30 21:33:04.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5491" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":283,"completed":30,"skipped":680,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 30 21:33:05.028: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 30 21:33:08.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7682" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":283,"completed":31,"skipped":719,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 21:33:15.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9692" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":283,"completed":32,"skipped":727,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:175
Mar 30 21:33:16.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4720" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":283,"completed":33,"skipped":730,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 33.607377ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 30 21:33:16.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3163" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":283,"completed":34,"skipped":783,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-zn9b
STEP: Creating a pod to test atomic-volume-subpath
Mar 30 21:33:17.182: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zn9b" in namespace "subpath-9367" to be "Succeeded or Failed"
Mar 30 21:33:17.211: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.290054ms
Mar 30 21:33:19.242: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Running", Reason="", readiness=true. Elapsed: 2.059571872s
Mar 30 21:33:21.272: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.090015686s
Mar 30 21:33:23.302: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Running", Reason="", readiness=true. Elapsed: 6.119444243s
Mar 30 21:33:25.333: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Running", Reason="", readiness=true. Elapsed: 8.150720884s
Mar 30 21:33:27.363: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Running", Reason="", readiness=true. Elapsed: 10.181018047s
Mar 30 21:33:29.395: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Running", Reason="", readiness=true. Elapsed: 12.212441234s
Mar 30 21:33:31.424: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Running", Reason="", readiness=true. Elapsed: 14.242175338s
Mar 30 21:33:33.490: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Running", Reason="", readiness=true. Elapsed: 16.307572614s
Mar 30 21:33:35.521: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Running", Reason="", readiness=true. Elapsed: 18.338385341s
Mar 30 21:33:37.550: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Running", Reason="", readiness=true. Elapsed: 20.367830657s
Mar 30 21:33:39.580: INFO: Pod "pod-subpath-test-configmap-zn9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.397517434s
STEP: Saw pod success
Mar 30 21:33:39.580: INFO: Pod "pod-subpath-test-configmap-zn9b" satisfied condition "Succeeded or Failed"
Mar 30 21:33:39.609: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-subpath-test-configmap-zn9b container test-container-subpath-configmap-zn9b: <nil>
STEP: delete the pod
Mar 30 21:33:39.684: INFO: Waiting for pod pod-subpath-test-configmap-zn9b to disappear
Mar 30 21:33:39.714: INFO: Pod pod-subpath-test-configmap-zn9b no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zn9b
Mar 30 21:33:39.714: INFO: Deleting pod "pod-subpath-test-configmap-zn9b" in namespace "subpath-9367"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 30 21:33:39.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9367" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":283,"completed":35,"skipped":796,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-3f1cb165-9b40-4934-aa6e-6ceb583a7d08
STEP: Creating a pod to test consume secrets
Mar 30 21:33:40.023: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9071eddb-733b-4bdc-8598-7aebe23ba47f" in namespace "projected-9972" to be "Succeeded or Failed"
Mar 30 21:33:40.053: INFO: Pod "pod-projected-secrets-9071eddb-733b-4bdc-8598-7aebe23ba47f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.133307ms
Mar 30 21:33:42.083: INFO: Pod "pod-projected-secrets-9071eddb-733b-4bdc-8598-7aebe23ba47f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060109665s
STEP: Saw pod success
Mar 30 21:33:42.084: INFO: Pod "pod-projected-secrets-9071eddb-733b-4bdc-8598-7aebe23ba47f" satisfied condition "Succeeded or Failed"
Mar 30 21:33:42.113: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-projected-secrets-9071eddb-733b-4bdc-8598-7aebe23ba47f container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 30 21:33:42.183: INFO: Waiting for pod pod-projected-secrets-9071eddb-733b-4bdc-8598-7aebe23ba47f to disappear
Mar 30 21:33:42.213: INFO: Pod pod-projected-secrets-9071eddb-733b-4bdc-8598-7aebe23ba47f no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 21:33:42.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9972" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":36,"skipped":815,"failed":0}

------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 21:33:50.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4049" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":283,"completed":37,"skipped":815,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 21:33:50.733: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 5 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 30 21:33:51.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200831, loc:(*time.Location)(0x7b57f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200831, loc:(*time.Location)(0x7b57f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200831, loc:(*time.Location)(0x7b57f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200831, loc:(*time.Location)(0x7b57f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 30 21:33:54.836: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 21:33:55.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3243" for this suite.
STEP: Destroying namespace "webhook-3243-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":283,"completed":38,"skipped":827,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:34:04.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9717" for this suite.
STEP: Destroying namespace "webhook-9717-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":283,"completed":39,"skipped":834,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
Mar 30 21:34:19.098: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 21:34:23.586: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 21:34:38.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2964" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":283,"completed":40,"skipped":877,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 26 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 21:34:46.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2014" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":283,"completed":41,"skipped":889,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 21:34:46.632: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b93634e7-d342-4317-be76-260b1309dd48" in namespace "projected-3959" to be "Succeeded or Failed"
Mar 30 21:34:46.663: INFO: Pod "downwardapi-volume-b93634e7-d342-4317-be76-260b1309dd48": Phase="Pending", Reason="", readiness=false. Elapsed: 31.098234ms
Mar 30 21:34:48.694: INFO: Pod "downwardapi-volume-b93634e7-d342-4317-be76-260b1309dd48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061547803s
STEP: Saw pod success
Mar 30 21:34:48.694: INFO: Pod "downwardapi-volume-b93634e7-d342-4317-be76-260b1309dd48" satisfied condition "Succeeded or Failed"
Mar 30 21:34:48.725: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downwardapi-volume-b93634e7-d342-4317-be76-260b1309dd48 container client-container: <nil>
STEP: delete the pod
Mar 30 21:34:48.796: INFO: Waiting for pod downwardapi-volume-b93634e7-d342-4317-be76-260b1309dd48 to disappear
Mar 30 21:34:48.825: INFO: Pod downwardapi-volume-b93634e7-d342-4317-be76-260b1309dd48 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 21:34:48.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3959" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":42,"skipped":896,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 30 21:34:48.919: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 30 21:34:49.125: INFO: Waiting up to 5m0s for pod "downward-api-1c1cdfd8-852d-43f1-aaef-cbad93bf22af" in namespace "downward-api-5428" to be "Succeeded or Failed"
Mar 30 21:34:49.154: INFO: Pod "downward-api-1c1cdfd8-852d-43f1-aaef-cbad93bf22af": Phase="Pending", Reason="", readiness=false. Elapsed: 29.084627ms
Mar 30 21:34:51.184: INFO: Pod "downward-api-1c1cdfd8-852d-43f1-aaef-cbad93bf22af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059896666s
STEP: Saw pod success
Mar 30 21:34:51.185: INFO: Pod "downward-api-1c1cdfd8-852d-43f1-aaef-cbad93bf22af" satisfied condition "Succeeded or Failed"
Mar 30 21:34:51.216: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downward-api-1c1cdfd8-852d-43f1-aaef-cbad93bf22af container dapi-container: <nil>
STEP: delete the pod
Mar 30 21:34:51.286: INFO: Waiting for pod downward-api-1c1cdfd8-852d-43f1-aaef-cbad93bf22af to disappear
Mar 30 21:34:51.315: INFO: Pod downward-api-1c1cdfd8-852d-43f1-aaef-cbad93bf22af no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 30 21:34:51.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5428" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":283,"completed":43,"skipped":936,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
Mar 30 21:34:51.740: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f454e3c7-2ae1-4571-9beb-4163a45b8058", Controller:(*bool)(0xc003e6d5e6), BlockOwnerDeletion:(*bool)(0xc003e6d5e7)}}
Mar 30 21:34:51.775: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"fd03da61-b086-410a-99b0-8f74bf910c72", Controller:(*bool)(0xc003fcdfe6), BlockOwnerDeletion:(*bool)(0xc003fcdfe7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 30 21:34:56.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2645" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":283,"completed":44,"skipped":941,"failed":0}
SSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  test/e2e/framework/framework.go:175
Mar 30 21:34:57.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-6876" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":283,"completed":45,"skipped":946,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Mar 30 21:35:38.055: INFO: Waiting for statefulset status.replicas updated to 0
Mar 30 21:35:38.084: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 30 21:35:38.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8834" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":283,"completed":46,"skipped":963,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 62 lines ...
Mar 30 21:36:02.388: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7517/pods","resourceVersion":"5217"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 30 21:36:02.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7517" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":283,"completed":47,"skipped":966,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 79 lines ...
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-znmkw webserver-deployment-595b5b9587- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-595b5b9587-znmkw 24786b67-8aee-439d-9cb5-2949fdf54408 5380 0 2020-03-30 21:36:02 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:192.168.32.35/32 cni.projectcalico.org/podIPs:192.168.32.35/32] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 499e7d05-08fb-4725-9739-c3a65a4012df 0xc00314a500 0xc00314a501}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-m7pwl.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.32.35,StartTime:2020-03-30 21:36:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 21:36:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4b6d671e320b88161e095a2383e034fba0487981838c08a545e148f651ec62cf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.32.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 21:36:12.804: INFO: Pod "webserver-deployment-c7997dcc8-7z49l" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7z49l webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-7z49l fc917df8-3779-4e13-8965-fb3e92701fb6 5766 0 2020-03-30 21:36:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.154.223/32 cni.projectcalico.org/podIPs:192.168.154.223/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc00314a840 0xc00314a841}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nfkzj.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.154.223,StartTime:2020-03-30 21:36:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.154.223,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 21:36:12.805: INFO: Pod "webserver-deployment-c7997dcc8-8zqzt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8zqzt webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-8zqzt 077d5503-8c34-4d92-85a8-de241c1c1d24 5774 0 2020-03-30 21:36:10 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.154.233/32 cni.projectcalico.org/podIPs:192.168.154.233/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc00314ab60 0xc00314ab61}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nfkzj.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 21:36:12.805: INFO: Pod "webserver-deployment-c7997dcc8-bdndj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bdndj webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-bdndj 1a477f9e-7095-4bf9-99db-23b14f1ec4ae 5513 0 2020-03-30 21:36:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.154.222/32 cni.projectcalico.org/podIPs:192.168.154.222/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc00314acf0 0xc00314acf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nfkzj.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.154.222,StartTime:2020-03-30 21:36:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.154.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 21:36:12.805: INFO: Pod "webserver-deployment-c7997dcc8-cg58x" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cg58x webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-cg58x 70b2496c-dae8-4373-b1dc-e66449b87f3e 5779 0 2020-03-30 21:36:10 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.32.46/32 cni.projectcalico.org/podIPs:192.168.32.46/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc00314af90 0xc00314af91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-m7pwl.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:,StartTime:2020-03-30 21:36:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 21:36:12.805: INFO: Pod "webserver-deployment-c7997dcc8-hv5qm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hv5qm webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-hv5qm 43b9b316-3f62-42b3-b6e2-79c864b7009e 5758 0 2020-03-30 21:36:10 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.154.231/32 cni.projectcalico.org/podIPs:192.168.154.231/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc00314b1b0 0xc00314b1b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nfkzj.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-30 21:36:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 21:36:12.806: INFO: Pod "webserver-deployment-c7997dcc8-jkp4j" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jkp4j webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-jkp4j 83163aa6-05b5-4cd9-ba3a-65eab9286058 5773 0 2020-03-30 21:36:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.32.36/32 cni.projectcalico.org/podIPs:192.168.32.36/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc00314b510 0xc00314b511}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-m7pwl.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.32.36,StartTime:2020-03-30 21:36:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.32.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 3 lines ...
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kffxr webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-kffxr 553cea9d-45c8-40f0-a669-5e1f3e17ec87 5719 0 2020-03-30 21:36:10 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.32.42/32 cni.projectcalico.org/podIPs:192.168.32.42/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc00314ba40 0xc00314ba41}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-m7pwl.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:,StartTime:2020-03-30 21:36:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 21:36:12.806: INFO: Pod "webserver-deployment-c7997dcc8-pttv6" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pttv6 webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-pttv6 217308df-6a60-4820-9fc6-cdff0c78ba17 5769 0 2020-03-30 21:36:10 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.154.232/32 cni.projectcalico.org/podIPs:192.168.154.232/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc00314bc50 0xc00314bc51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nfkzj.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-30 21:36:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 21:36:12.808: INFO: Pod "webserver-deployment-c7997dcc8-q9c9l" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q9c9l webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-q9c9l 3e2bddd1-4517-42c0-80a8-6f87e552d670 5726 0 2020-03-30 21:36:10 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.32.43/32 cni.projectcalico.org/podIPs:192.168.32.43/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc00314bf00 0xc00314bf01}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-m7pwl.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:,StartTime:2020-03-30 21:36:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 21:36:12.808: INFO: Pod "webserver-deployment-c7997dcc8-w5n9r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w5n9r webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-w5n9r 95f4bc02-290c-40c7-81e3-4717cbd3b38a 5507 0 2020-03-30 21:36:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.32.37/32 cni.projectcalico.org/podIPs:192.168.32.37/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc0031262d0 0xc0031262d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-m7pwl.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.32.37,StartTime:2020-03-30 21:36:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.32.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 21:36:12.808: INFO: Pod "webserver-deployment-c7997dcc8-wqbcs" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wqbcs webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-wqbcs f319644e-d96c-4d5a-84e5-f342bed47cd2 5684 0 2020-03-30 21:36:10 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.32.40/32 cni.projectcalico.org/podIPs:192.168.32.40/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc0031265e0 0xc0031265e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-m7pwl.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 21:36:12.808: INFO: Pod "webserver-deployment-c7997dcc8-z8x8z" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z8x8z webserver-deployment-c7997dcc8- deployment-0 /api/v1/namespaces/deployment-0/pods/webserver-deployment-c7997dcc8-z8x8z 17a9bb8a-fd3f-481b-b632-cf1470a7354b 5520 0 2020-03-30 21:36:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.154.221/32 cni.projectcalico.org/podIPs:192.168.154.221/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2aaf83c5-94c3-4d42-a621-799d4ac30626 0xc003126740 0xc003126741}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bnmbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bnmbs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bnmbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nfkzj.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:36:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.154.221,StartTime:2020-03-30 21:36:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.154.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 30 21:36:12.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-0" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":283,"completed":48,"skipped":1011,"failed":0}

------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 30 21:36:13.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8159" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":283,"completed":49,"skipped":1011,"failed":0}
S
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Mar 30 21:36:23.852: INFO: Terminating Job.batch foo pods took: 400.304183ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 30 21:37:02.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2564" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":283,"completed":50,"skipped":1012,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 25 lines ...
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 30 21:37:07.131: INFO: File wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 21:37:07.161: INFO: File jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains '' instead of 'bar.example.com.'
Mar 30 21:37:07.161: INFO: Lookups using dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f failed for: [wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local]

Mar 30 21:37:12.193: INFO: File wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 21:37:12.225: INFO: File jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 21:37:12.225: INFO: Lookups using dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f failed for: [wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local]

Mar 30 21:37:17.192: INFO: File wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 21:37:17.224: INFO: File jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 21:37:17.224: INFO: Lookups using dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f failed for: [wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local]

Mar 30 21:37:22.193: INFO: File wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 21:37:22.223: INFO: File jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 21:37:22.223: INFO: Lookups using dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f failed for: [wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local]

Mar 30 21:37:27.194: INFO: File wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 21:37:27.224: INFO: File jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 21:37:27.224: INFO: Lookups using dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f failed for: [wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local]

Mar 30 21:37:32.194: INFO: File wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 21:37:32.225: INFO: File jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local from pod  dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 21:37:32.225: INFO: Lookups using dns-7835/dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f failed for: [wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local jessie_udp@dns-test-service-3.dns-7835.svc.cluster.local]

Mar 30 21:37:37.226: INFO: DNS probes using dns-test-6f23c578-71ee-4362-a2e6-e9d91e1e8c7f succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7835.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7835.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 21:37:39.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7835" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":283,"completed":51,"skipped":1029,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
Mar 30 21:39:55.413: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:175
Mar 30 21:39:55.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-8260" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":283,"completed":52,"skipped":1037,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 21:40:11.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1074" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":283,"completed":53,"skipped":1082,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Mar 30 21:40:14.284: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 21:40:14.521: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 21:40:14.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4804" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":283,"completed":54,"skipped":1087,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 51 lines ...
Mar 30 21:40:22.969: INFO: stderr: ""
Mar 30 21:40:22.969: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 21:40:22.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6221" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":283,"completed":55,"skipped":1089,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:40:28.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9273" for this suite.
STEP: Destroying namespace "webhook-9273-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":283,"completed":56,"skipped":1116,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 21:40:29.158: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94ab0877-88dd-4368-9972-5dd12e761314" in namespace "downward-api-4419" to be "Succeeded or Failed"
Mar 30 21:40:29.191: INFO: Pod "downwardapi-volume-94ab0877-88dd-4368-9972-5dd12e761314": Phase="Pending", Reason="", readiness=false. Elapsed: 32.291264ms
Mar 30 21:40:31.221: INFO: Pod "downwardapi-volume-94ab0877-88dd-4368-9972-5dd12e761314": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062376237s
STEP: Saw pod success
Mar 30 21:40:31.221: INFO: Pod "downwardapi-volume-94ab0877-88dd-4368-9972-5dd12e761314" satisfied condition "Succeeded or Failed"
Mar 30 21:40:31.250: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downwardapi-volume-94ab0877-88dd-4368-9972-5dd12e761314 container client-container: <nil>
STEP: delete the pod
Mar 30 21:40:31.322: INFO: Waiting for pod downwardapi-volume-94ab0877-88dd-4368-9972-5dd12e761314 to disappear
Mar 30 21:40:31.351: INFO: Pod downwardapi-volume-94ab0877-88dd-4368-9972-5dd12e761314 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 21:40:31.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4419" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":283,"completed":57,"skipped":1147,"failed":0}
SS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 18 lines ...
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/framework/framework.go:175
Mar 30 21:40:46.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4660" for this suite.
•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":283,"completed":58,"skipped":1149,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Mar 30 21:40:49.765: INFO: Successfully updated pod "annotationupdate2053a655-d861-4d04-a060-1acda3eaef7f"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 21:40:51.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3858" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":59,"skipped":1176,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 338 lines ...
Mar 30 21:40:58.472: INFO: Deleting ReplicationController proxy-service-4dpxf took: 33.482833ms
Mar 30 21:40:58.573: INFO: Terminating ReplicationController proxy-service-4dpxf pods took: 100.238589ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 30 21:41:10.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7627" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":283,"completed":60,"skipped":1196,"failed":0}
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 30 21:41:13.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9944" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":61,"skipped":1202,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-f8c2053e-5bb2-43ca-afed-ba26ca26dde5
STEP: Creating a pod to test consume configMaps
Mar 30 21:41:13.335: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a52103b-bc9a-4b2e-9705-6d1d9e7fdd78" in namespace "projected-7042" to be "Succeeded or Failed"
Mar 30 21:41:13.370: INFO: Pod "pod-projected-configmaps-1a52103b-bc9a-4b2e-9705-6d1d9e7fdd78": Phase="Pending", Reason="", readiness=false. Elapsed: 34.333224ms
Mar 30 21:41:15.400: INFO: Pod "pod-projected-configmaps-1a52103b-bc9a-4b2e-9705-6d1d9e7fdd78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063996528s
STEP: Saw pod success
Mar 30 21:41:15.400: INFO: Pod "pod-projected-configmaps-1a52103b-bc9a-4b2e-9705-6d1d9e7fdd78" satisfied condition "Succeeded or Failed"
Mar 30 21:41:15.429: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-projected-configmaps-1a52103b-bc9a-4b2e-9705-6d1d9e7fdd78 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 21:41:15.501: INFO: Waiting for pod pod-projected-configmaps-1a52103b-bc9a-4b2e-9705-6d1d9e7fdd78 to disappear
Mar 30 21:41:15.530: INFO: Pod pod-projected-configmaps-1a52103b-bc9a-4b2e-9705-6d1d9e7fdd78 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 21:41:15.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7042" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":62,"skipped":1211,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
Mar 30 21:41:23.023: INFO: stderr: ""
Mar 30 21:41:23.023: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 21:41:23.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8137" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":283,"completed":63,"skipped":1229,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-2fa78343-3cb6-4ebc-af16-0c53a61c15a4
STEP: Creating a pod to test consume secrets
Mar 30 21:41:23.338: INFO: Waiting up to 5m0s for pod "pod-secrets-3d49d906-f76c-428a-9140-5ae12ad0f719" in namespace "secrets-2096" to be "Succeeded or Failed"
Mar 30 21:41:23.368: INFO: Pod "pod-secrets-3d49d906-f76c-428a-9140-5ae12ad0f719": Phase="Pending", Reason="", readiness=false. Elapsed: 30.295367ms
Mar 30 21:41:25.398: INFO: Pod "pod-secrets-3d49d906-f76c-428a-9140-5ae12ad0f719": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060557796s
STEP: Saw pod success
Mar 30 21:41:25.398: INFO: Pod "pod-secrets-3d49d906-f76c-428a-9140-5ae12ad0f719" satisfied condition "Succeeded or Failed"
Mar 30 21:41:25.428: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-secrets-3d49d906-f76c-428a-9140-5ae12ad0f719 container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 21:41:25.500: INFO: Waiting for pod pod-secrets-3d49d906-f76c-428a-9140-5ae12ad0f719 to disappear
Mar 30 21:41:25.529: INFO: Pod pod-secrets-3d49d906-f76c-428a-9140-5ae12ad0f719 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 21:41:25.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2096" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":64,"skipped":1233,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 21:41:25.617: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 30 21:41:25.774: INFO: Waiting up to 5m0s for pod "pod-c7ee9ee1-70af-442c-8d95-c9dcd363d335" in namespace "emptydir-4480" to be "Succeeded or Failed"
Mar 30 21:41:25.807: INFO: Pod "pod-c7ee9ee1-70af-442c-8d95-c9dcd363d335": Phase="Pending", Reason="", readiness=false. Elapsed: 32.115301ms
Mar 30 21:41:27.837: INFO: Pod "pod-c7ee9ee1-70af-442c-8d95-c9dcd363d335": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062342426s
STEP: Saw pod success
Mar 30 21:41:27.837: INFO: Pod "pod-c7ee9ee1-70af-442c-8d95-c9dcd363d335" satisfied condition "Succeeded or Failed"
Mar 30 21:41:27.866: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-c7ee9ee1-70af-442c-8d95-c9dcd363d335 container test-container: <nil>
STEP: delete the pod
Mar 30 21:41:27.938: INFO: Waiting for pod pod-c7ee9ee1-70af-442c-8d95-c9dcd363d335 to disappear
Mar 30 21:41:27.966: INFO: Pod pod-c7ee9ee1-70af-442c-8d95-c9dcd363d335 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 21:41:27.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4480" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":65,"skipped":1240,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 30 21:41:30.422: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 30 21:41:30.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5001" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":283,"completed":66,"skipped":1271,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 21:41:30.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3812" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":283,"completed":67,"skipped":1299,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 9 lines ...
Mar 30 21:41:31.037: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 30 21:41:31.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4891" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":283,"completed":68,"skipped":1302,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Mar 30 21:41:33.611: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:33.641: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:33.736: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:33.766: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:33.797: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:33.829: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:33.891: INFO: Lookups using dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local]

Mar 30 21:41:38.924: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:38.961: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:38.992: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:39.023: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:39.119: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:39.151: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:39.182: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:39.213: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:39.275: INFO: Lookups using dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local]

Mar 30 21:41:43.922: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:43.953: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:43.984: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:44.015: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:44.108: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:44.139: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:44.170: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:44.201: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:44.262: INFO: Lookups using dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local]

Mar 30 21:41:48.930: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:48.962: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:48.996: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:49.028: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:49.125: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:49.156: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:49.186: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:49.216: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:49.278: INFO: Lookups using dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local]

Mar 30 21:41:53.924: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:53.956: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:53.987: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:54.018: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:54.112: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:54.142: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:54.174: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:54.204: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:54.267: INFO: Lookups using dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local]

Mar 30 21:41:58.932: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:58.970: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:59.002: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:59.032: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:59.126: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:59.158: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:59.188: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:59.219: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local from pod dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33: the server could not find the requested resource (get pods dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33)
Mar 30 21:41:59.281: INFO: Lookups using dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8069.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8069.svc.cluster.local jessie_udp@dns-test-service-2.dns-8069.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8069.svc.cluster.local]

Mar 30 21:42:04.272: INFO: DNS probes using dns-8069/dns-test-ba590d8c-e7d1-48b3-8be3-7d6364c26e33 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 21:42:04.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8069" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":283,"completed":69,"skipped":1317,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-5692/secret-test-8da01d4e-6f9c-4baf-8460-486e227acdcb
STEP: Creating a pod to test consume secrets
Mar 30 21:42:04.612: INFO: Waiting up to 5m0s for pod "pod-configmaps-7165b9a7-a0b3-4b98-98d7-3ab859fa7f3e" in namespace "secrets-5692" to be "Succeeded or Failed"
Mar 30 21:42:04.642: INFO: Pod "pod-configmaps-7165b9a7-a0b3-4b98-98d7-3ab859fa7f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.429935ms
Mar 30 21:42:06.672: INFO: Pod "pod-configmaps-7165b9a7-a0b3-4b98-98d7-3ab859fa7f3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060187447s
STEP: Saw pod success
Mar 30 21:42:06.672: INFO: Pod "pod-configmaps-7165b9a7-a0b3-4b98-98d7-3ab859fa7f3e" satisfied condition "Succeeded or Failed"
Mar 30 21:42:06.702: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-configmaps-7165b9a7-a0b3-4b98-98d7-3ab859fa7f3e container env-test: <nil>
STEP: delete the pod
Mar 30 21:42:06.775: INFO: Waiting for pod pod-configmaps-7165b9a7-a0b3-4b98-98d7-3ab859fa7f3e to disappear
Mar 30 21:42:06.804: INFO: Pod pod-configmaps-7165b9a7-a0b3-4b98-98d7-3ab859fa7f3e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 30 21:42:06.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5692" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":70,"skipped":1321,"failed":0}

------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 30 21:42:07.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-468" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":283,"completed":71,"skipped":1321,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-1c5ed703-33e1-4fcf-82f2-a74861b3de02
STEP: Creating a pod to test consume secrets
Mar 30 21:42:07.467: INFO: Waiting up to 5m0s for pod "pod-secrets-efd6f4c0-9ef8-4dc7-8bb8-0fb83e218d7a" in namespace "secrets-4588" to be "Succeeded or Failed"
Mar 30 21:42:07.497: INFO: Pod "pod-secrets-efd6f4c0-9ef8-4dc7-8bb8-0fb83e218d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.812432ms
Mar 30 21:42:09.527: INFO: Pod "pod-secrets-efd6f4c0-9ef8-4dc7-8bb8-0fb83e218d7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060137535s
STEP: Saw pod success
Mar 30 21:42:09.527: INFO: Pod "pod-secrets-efd6f4c0-9ef8-4dc7-8bb8-0fb83e218d7a" satisfied condition "Succeeded or Failed"
Mar 30 21:42:09.557: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-secrets-efd6f4c0-9ef8-4dc7-8bb8-0fb83e218d7a container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 21:42:09.641: INFO: Waiting for pod pod-secrets-efd6f4c0-9ef8-4dc7-8bb8-0fb83e218d7a to disappear
Mar 30 21:42:09.669: INFO: Pod pod-secrets-efd6f4c0-9ef8-4dc7-8bb8-0fb83e218d7a no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 21:42:09.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4588" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":72,"skipped":1328,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 21:42:09.935: INFO: Waiting up to 5m0s for pod "busybox-user-65534-521ad1db-ae2d-477e-b2aa-56d0786687a9" in namespace "security-context-test-1771" to be "Succeeded or Failed"
Mar 30 21:42:09.965: INFO: Pod "busybox-user-65534-521ad1db-ae2d-477e-b2aa-56d0786687a9": Phase="Pending", Reason="", readiness=false. Elapsed: 30.191564ms
Mar 30 21:42:11.994: INFO: Pod "busybox-user-65534-521ad1db-ae2d-477e-b2aa-56d0786687a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059200155s
Mar 30 21:42:11.994: INFO: Pod "busybox-user-65534-521ad1db-ae2d-477e-b2aa-56d0786687a9" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 30 21:42:11.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1771" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":73,"skipped":1337,"failed":0}

------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-52905e8a-e996-4f3f-bb14-71e913d5a394
STEP: Creating a pod to test consume configMaps
Mar 30 21:42:12.283: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-12f6bd4a-84c2-4760-92af-a0db9e60cdeb" in namespace "projected-4888" to be "Succeeded or Failed"
Mar 30 21:42:12.322: INFO: Pod "pod-projected-configmaps-12f6bd4a-84c2-4760-92af-a0db9e60cdeb": Phase="Pending", Reason="", readiness=false. Elapsed: 38.715737ms
Mar 30 21:42:14.354: INFO: Pod "pod-projected-configmaps-12f6bd4a-84c2-4760-92af-a0db9e60cdeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.070566358s
STEP: Saw pod success
Mar 30 21:42:14.354: INFO: Pod "pod-projected-configmaps-12f6bd4a-84c2-4760-92af-a0db9e60cdeb" satisfied condition "Succeeded or Failed"
Mar 30 21:42:14.385: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-projected-configmaps-12f6bd4a-84c2-4760-92af-a0db9e60cdeb container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 21:42:14.455: INFO: Waiting for pod pod-projected-configmaps-12f6bd4a-84c2-4760-92af-a0db9e60cdeb to disappear
Mar 30 21:42:14.484: INFO: Pod pod-projected-configmaps-12f6bd4a-84c2-4760-92af-a0db9e60cdeb no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 21:42:14.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4888" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":74,"skipped":1337,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0330 21:42:15.557270   26158 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 30 21:42:15.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9279" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":283,"completed":75,"skipped":1340,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Mar 30 21:42:18.491: INFO: Unable to read jessie_udp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:18.522: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:18.554: INFO: Unable to read jessie_udp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:18.584: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:18.615: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:18.645: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:18.834: INFO: Lookups using dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7698 wheezy_tcp@dns-test-service.dns-7698 wheezy_udp@dns-test-service.dns-7698.svc wheezy_tcp@dns-test-service.dns-7698.svc wheezy_udp@_http._tcp.dns-test-service.dns-7698.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7698.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7698 jessie_tcp@dns-test-service.dns-7698 jessie_udp@dns-test-service.dns-7698.svc jessie_tcp@dns-test-service.dns-7698.svc jessie_udp@_http._tcp.dns-test-service.dns-7698.svc jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc]

Mar 30 21:42:23.869: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:23.900: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:23.936: INFO: Unable to read wheezy_udp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:23.967: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:23.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
... skipping 5 lines ...
Mar 30 21:42:24.373: INFO: Unable to read jessie_udp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:24.405: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:24.435: INFO: Unable to read jessie_udp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:24.467: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:24.499: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:24.529: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:24.721: INFO: Lookups using dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7698 wheezy_tcp@dns-test-service.dns-7698 wheezy_udp@dns-test-service.dns-7698.svc wheezy_tcp@dns-test-service.dns-7698.svc wheezy_udp@_http._tcp.dns-test-service.dns-7698.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7698.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7698 jessie_tcp@dns-test-service.dns-7698 jessie_udp@dns-test-service.dns-7698.svc jessie_tcp@dns-test-service.dns-7698.svc jessie_udp@_http._tcp.dns-test-service.dns-7698.svc jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc]

Mar 30 21:42:28.866: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:28.901: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:28.950: INFO: Unable to read wheezy_udp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:28.982: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:29.013: INFO: Unable to read wheezy_udp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
... skipping 5 lines ...
Mar 30 21:42:29.412: INFO: Unable to read jessie_udp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:29.443: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:29.474: INFO: Unable to read jessie_udp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:29.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:29.544: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:29.575: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:29.772: INFO: Lookups using dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7698 wheezy_tcp@dns-test-service.dns-7698 wheezy_udp@dns-test-service.dns-7698.svc wheezy_tcp@dns-test-service.dns-7698.svc wheezy_udp@_http._tcp.dns-test-service.dns-7698.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7698.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7698 jessie_tcp@dns-test-service.dns-7698 jessie_udp@dns-test-service.dns-7698.svc jessie_tcp@dns-test-service.dns-7698.svc jessie_udp@_http._tcp.dns-test-service.dns-7698.svc jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc]

Mar 30 21:42:33.865: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:33.896: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:33.927: INFO: Unable to read wheezy_udp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:33.958: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:33.990: INFO: Unable to read wheezy_udp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
... skipping 5 lines ...
Mar 30 21:42:34.363: INFO: Unable to read jessie_udp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:34.393: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:34.426: INFO: Unable to read jessie_udp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:34.455: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:34.487: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:34.518: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:34.710: INFO: Lookups using dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7698 wheezy_tcp@dns-test-service.dns-7698 wheezy_udp@dns-test-service.dns-7698.svc wheezy_tcp@dns-test-service.dns-7698.svc wheezy_udp@_http._tcp.dns-test-service.dns-7698.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7698.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7698 jessie_tcp@dns-test-service.dns-7698 jessie_udp@dns-test-service.dns-7698.svc jessie_tcp@dns-test-service.dns-7698.svc jessie_udp@_http._tcp.dns-test-service.dns-7698.svc jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc]

Mar 30 21:42:38.865: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:38.898: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:38.932: INFO: Unable to read wheezy_udp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:38.966: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:38.997: INFO: Unable to read wheezy_udp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
... skipping 5 lines ...
Mar 30 21:42:39.377: INFO: Unable to read jessie_udp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:39.408: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:39.440: INFO: Unable to read jessie_udp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:39.471: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:39.501: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:39.533: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:39.719: INFO: Lookups using dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7698 wheezy_tcp@dns-test-service.dns-7698 wheezy_udp@dns-test-service.dns-7698.svc wheezy_tcp@dns-test-service.dns-7698.svc wheezy_udp@_http._tcp.dns-test-service.dns-7698.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7698.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7698 jessie_tcp@dns-test-service.dns-7698 jessie_udp@dns-test-service.dns-7698.svc jessie_tcp@dns-test-service.dns-7698.svc jessie_udp@_http._tcp.dns-test-service.dns-7698.svc jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc]

Mar 30 21:42:43.866: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:43.897: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:43.931: INFO: Unable to read wheezy_udp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:43.962: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:43.996: INFO: Unable to read wheezy_udp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
... skipping 5 lines ...
Mar 30 21:42:44.377: INFO: Unable to read jessie_udp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:44.408: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698 from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:44.439: INFO: Unable to read jessie_udp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:44.471: INFO: Unable to read jessie_tcp@dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:44.502: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:44.532: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc from pod dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10: the server could not find the requested resource (get pods dns-test-296d3825-6aba-41ce-a618-d299e261fa10)
Mar 30 21:42:44.727: INFO: Lookups using dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7698 wheezy_tcp@dns-test-service.dns-7698 wheezy_udp@dns-test-service.dns-7698.svc wheezy_tcp@dns-test-service.dns-7698.svc wheezy_udp@_http._tcp.dns-test-service.dns-7698.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7698.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7698 jessie_tcp@dns-test-service.dns-7698 jessie_udp@dns-test-service.dns-7698.svc jessie_tcp@dns-test-service.dns-7698.svc jessie_udp@_http._tcp.dns-test-service.dns-7698.svc jessie_tcp@_http._tcp.dns-test-service.dns-7698.svc]

Mar 30 21:42:49.723: INFO: DNS probes using dns-7698/dns-test-296d3825-6aba-41ce-a618-d299e261fa10 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 21:42:49.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7698" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":283,"completed":76,"skipped":1351,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Mar 30 21:43:40.440: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2224 /api/v1/namespaces/watch-2224/configmaps/e2e-watch-test-configmap-b 679f0e85-c02e-474b-90dc-c0f9743927c8 8518 0 2020-03-30 21:43:30 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 30 21:43:40.440: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2224 /api/v1/namespaces/watch-2224/configmaps/e2e-watch-test-configmap-b 679f0e85-c02e-474b-90dc-c0f9743927c8 8518 0 2020-03-30 21:43:30 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 30 21:43:50.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2224" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":283,"completed":77,"skipped":1353,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Mar 30 21:43:53.845: INFO: Deleting pod "var-expansion-84dfef7d-e9fd-4fc0-9590-ac8125ae4f46" in namespace "var-expansion-9069"
Mar 30 21:43:53.879: INFO: Wait up to 5m0s for pod "var-expansion-84dfef7d-e9fd-4fc0-9590-ac8125ae4f46" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 21:44:27.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9069" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":283,"completed":78,"skipped":1377,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 27 lines ...
Mar 30 21:44:48.874: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 21:44:49.107: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 30 21:44:49.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-843" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":283,"completed":79,"skipped":1394,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:44:53.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1996" for this suite.
STEP: Destroying namespace "webhook-1996-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":283,"completed":80,"skipped":1403,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Mar 30 21:44:55.015: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 30 21:44:55.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3287" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":283,"completed":81,"skipped":1428,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Mar 30 21:45:19.758: INFO: Restart count of pod container-probe-6418/liveness-13b9ba59-80b3-4fab-9b60-7b1d108669b4 is now 1 (22.363906554s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 21:45:19.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6418" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":82,"skipped":1450,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 21:45:19.887: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 30 21:45:20.049: INFO: Waiting up to 5m0s for pod "pod-33c71fee-a71a-4f8e-b8c8-088fef119e62" in namespace "emptydir-2596" to be "Succeeded or Failed"
Mar 30 21:45:20.083: INFO: Pod "pod-33c71fee-a71a-4f8e-b8c8-088fef119e62": Phase="Pending", Reason="", readiness=false. Elapsed: 34.245389ms
Mar 30 21:45:22.114: INFO: Pod "pod-33c71fee-a71a-4f8e-b8c8-088fef119e62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064854754s
Mar 30 21:45:24.144: INFO: Pod "pod-33c71fee-a71a-4f8e-b8c8-088fef119e62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095063556s
STEP: Saw pod success
Mar 30 21:45:24.144: INFO: Pod "pod-33c71fee-a71a-4f8e-b8c8-088fef119e62" satisfied condition "Succeeded or Failed"
Mar 30 21:45:24.173: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-33c71fee-a71a-4f8e-b8c8-088fef119e62 container test-container: <nil>
STEP: delete the pod
Mar 30 21:45:24.263: INFO: Waiting for pod pod-33c71fee-a71a-4f8e-b8c8-088fef119e62 to disappear
Mar 30 21:45:24.293: INFO: Pod pod-33c71fee-a71a-4f8e-b8c8-088fef119e62 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 21:45:24.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2596" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":83,"skipped":1457,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Mar 30 21:45:40.856: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 30 21:45:40.885: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 30 21:45:40.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7563" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":283,"completed":84,"skipped":1458,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Mar 30 21:45:41.316: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5690 /api/v1/namespaces/watch-5690/configmaps/e2e-watch-test-resource-version 618116d5-471f-4a41-b2e3-73b4172a2407 9203 0 2020-03-30 21:45:41 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 30 21:45:41.316: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5690 /api/v1/namespaces/watch-5690/configmaps/e2e-watch-test-resource-version 618116d5-471f-4a41-b2e3-73b4172a2407 9204 0 2020-03-30 21:45:41 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 30 21:45:41.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5690" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":283,"completed":85,"skipped":1459,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 21:45:41.382: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 30 21:45:41.539: INFO: Waiting up to 5m0s for pod "pod-67bd1d2b-0866-446c-8066-2f9f98933e1a" in namespace "emptydir-2583" to be "Succeeded or Failed"
Mar 30 21:45:41.568: INFO: Pod "pod-67bd1d2b-0866-446c-8066-2f9f98933e1a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.619406ms
Mar 30 21:45:43.599: INFO: Pod "pod-67bd1d2b-0866-446c-8066-2f9f98933e1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059814873s
STEP: Saw pod success
Mar 30 21:45:43.599: INFO: Pod "pod-67bd1d2b-0866-446c-8066-2f9f98933e1a" satisfied condition "Succeeded or Failed"
Mar 30 21:45:43.628: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-67bd1d2b-0866-446c-8066-2f9f98933e1a container test-container: <nil>
STEP: delete the pod
Mar 30 21:45:43.706: INFO: Waiting for pod pod-67bd1d2b-0866-446c-8066-2f9f98933e1a to disappear
Mar 30 21:45:43.735: INFO: Pod pod-67bd1d2b-0866-446c-8066-2f9f98933e1a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 21:45:43.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2583" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":86,"skipped":1466,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:45:48.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6150" for this suite.
STEP: Destroying namespace "webhook-6150-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":283,"completed":87,"skipped":1515,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 21:45:49.280: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6537e93c-f156-4da6-befb-49d6efc4624c" in namespace "projected-7748" to be "Succeeded or Failed"
Mar 30 21:45:49.312: INFO: Pod "downwardapi-volume-6537e93c-f156-4da6-befb-49d6efc4624c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.485765ms
Mar 30 21:45:51.342: INFO: Pod "downwardapi-volume-6537e93c-f156-4da6-befb-49d6efc4624c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061890828s
STEP: Saw pod success
Mar 30 21:45:51.342: INFO: Pod "downwardapi-volume-6537e93c-f156-4da6-befb-49d6efc4624c" satisfied condition "Succeeded or Failed"
Mar 30 21:45:51.372: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod downwardapi-volume-6537e93c-f156-4da6-befb-49d6efc4624c container client-container: <nil>
STEP: delete the pod
Mar 30 21:45:51.467: INFO: Waiting for pod downwardapi-volume-6537e93c-f156-4da6-befb-49d6efc4624c to disappear
Mar 30 21:45:51.496: INFO: Pod downwardapi-volume-6537e93c-f156-4da6-befb-49d6efc4624c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 21:45:51.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7748" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":283,"completed":88,"skipped":1515,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-8f09f305-98da-443f-85bf-f8c8b4deafeb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 21:45:58.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8054" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":89,"skipped":1528,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 21:46:14.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3829" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":283,"completed":90,"skipped":1534,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
Mar 30 21:47:05.492: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-30T21:46:25Z generation:2 name:name2 resourceVersion:9685 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cd075bff-3b23-4c19-b6a9-32ef577e54a6] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 21:47:15.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-7558" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":283,"completed":91,"skipped":1543,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 21:47:15.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7048" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":283,"completed":92,"skipped":1555,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-c06533b5-45a2-4a40-ba3c-6ffd71cfe6be
STEP: Creating a pod to test consume configMaps
Mar 30 21:47:16.252: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-05e58acb-0ee5-480c-8d3a-bb7a8542ac58" in namespace "projected-9897" to be "Succeeded or Failed"
Mar 30 21:47:16.287: INFO: Pod "pod-projected-configmaps-05e58acb-0ee5-480c-8d3a-bb7a8542ac58": Phase="Pending", Reason="", readiness=false. Elapsed: 34.564059ms
Mar 30 21:47:18.317: INFO: Pod "pod-projected-configmaps-05e58acb-0ee5-480c-8d3a-bb7a8542ac58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064446297s
STEP: Saw pod success
Mar 30 21:47:18.317: INFO: Pod "pod-projected-configmaps-05e58acb-0ee5-480c-8d3a-bb7a8542ac58" satisfied condition "Succeeded or Failed"
Mar 30 21:47:18.347: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-projected-configmaps-05e58acb-0ee5-480c-8d3a-bb7a8542ac58 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 21:47:18.424: INFO: Waiting for pod pod-projected-configmaps-05e58acb-0ee5-480c-8d3a-bb7a8542ac58 to disappear
Mar 30 21:47:18.456: INFO: Pod pod-projected-configmaps-05e58acb-0ee5-480c-8d3a-bb7a8542ac58 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 21:47:18.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9897" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":93,"skipped":1564,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 21:47:18.547: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 30 21:47:18.702: INFO: Waiting up to 5m0s for pod "pod-b6dc2c69-6b85-4c4a-99eb-2de39ece730f" in namespace "emptydir-2412" to be "Succeeded or Failed"
Mar 30 21:47:18.732: INFO: Pod "pod-b6dc2c69-6b85-4c4a-99eb-2de39ece730f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.840223ms
Mar 30 21:47:20.763: INFO: Pod "pod-b6dc2c69-6b85-4c4a-99eb-2de39ece730f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060305896s
STEP: Saw pod success
Mar 30 21:47:20.763: INFO: Pod "pod-b6dc2c69-6b85-4c4a-99eb-2de39ece730f" satisfied condition "Succeeded or Failed"
Mar 30 21:47:20.793: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-b6dc2c69-6b85-4c4a-99eb-2de39ece730f container test-container: <nil>
STEP: delete the pod
Mar 30 21:47:20.876: INFO: Waiting for pod pod-b6dc2c69-6b85-4c4a-99eb-2de39ece730f to disappear
Mar 30 21:47:20.905: INFO: Pod pod-b6dc2c69-6b85-4c4a-99eb-2de39ece730f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 21:47:20.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2412" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":94,"skipped":1575,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Mar 30 21:47:29.512: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 30 21:47:29.541: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 30 21:47:29.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5932" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":283,"completed":95,"skipped":1591,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 30 21:47:31.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6788" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":283,"completed":96,"skipped":1633,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
Mar 30 21:48:47.040: INFO: Terminating ReplicationController wrapped-volume-race-1def6840-f4fa-44cd-91ae-fe8497aac5ba pods took: 400.321851ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 30 21:48:54.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7918" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":283,"completed":97,"skipped":1645,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Mar 30 21:49:17.194: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 21:49:17.417: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 30 21:49:17.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3206" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":283,"completed":98,"skipped":1649,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-pqrv
STEP: Creating a pod to test atomic-volume-subpath
Mar 30 21:49:17.721: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-pqrv" in namespace "subpath-6730" to be "Succeeded or Failed"
Mar 30 21:49:17.753: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Pending", Reason="", readiness=false. Elapsed: 31.890927ms
Mar 30 21:49:19.783: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Running", Reason="", readiness=true. Elapsed: 2.062447156s
Mar 30 21:49:21.814: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Running", Reason="", readiness=true. Elapsed: 4.092583628s
Mar 30 21:49:23.844: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Running", Reason="", readiness=true. Elapsed: 6.122690858s
Mar 30 21:49:25.874: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Running", Reason="", readiness=true. Elapsed: 8.153047626s
Mar 30 21:49:27.904: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Running", Reason="", readiness=true. Elapsed: 10.1828754s
Mar 30 21:49:29.935: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Running", Reason="", readiness=true. Elapsed: 12.21368418s
Mar 30 21:49:31.965: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Running", Reason="", readiness=true. Elapsed: 14.243583848s
Mar 30 21:49:33.994: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Running", Reason="", readiness=true. Elapsed: 16.273164544s
Mar 30 21:49:36.024: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Running", Reason="", readiness=true. Elapsed: 18.303266854s
Mar 30 21:49:38.054: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Running", Reason="", readiness=true. Elapsed: 20.332973927s
Mar 30 21:49:40.084: INFO: Pod "pod-subpath-test-downwardapi-pqrv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.362730109s
STEP: Saw pod success
Mar 30 21:49:40.084: INFO: Pod "pod-subpath-test-downwardapi-pqrv" satisfied condition "Succeeded or Failed"
Mar 30 21:49:40.114: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-subpath-test-downwardapi-pqrv container test-container-subpath-downwardapi-pqrv: <nil>
STEP: delete the pod
Mar 30 21:49:40.196: INFO: Waiting for pod pod-subpath-test-downwardapi-pqrv to disappear
Mar 30 21:49:40.225: INFO: Pod pod-subpath-test-downwardapi-pqrv no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-pqrv
Mar 30 21:49:40.225: INFO: Deleting pod "pod-subpath-test-downwardapi-pqrv" in namespace "subpath-6730"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 30 21:49:40.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6730" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":283,"completed":99,"skipped":1671,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 30 21:49:43.221: INFO: Successfully updated pod "annotationupdatecc2ba5e6-408f-4259-a2ef-a5d0e20a8374"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 21:49:47.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5688" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":100,"skipped":1703,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:49:51.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5739" for this suite.
STEP: Destroying namespace "webhook-5739-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":283,"completed":101,"skipped":1742,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 21:49:51.704: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Mar 30 21:49:51.834: INFO: PodSpec: initContainers in spec.initContainers
Mar 30 21:50:36.033: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6f7776c0-434b-4e10-89b4-b97edca29d1d", GenerateName:"", Namespace:"init-container-1688", SelfLink:"/api/v1/namespaces/init-container-1688/pods/pod-init-6f7776c0-434b-4e10-89b4-b97edca29d1d", UID:"15e43d6e-109e-43d1-bbe5-ebf06224134c", ResourceVersion:"11540", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721201791, loc:(*time.Location)(0x7b57f20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"834027220"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"192.168.32.26/32", "cni.projectcalico.org/podIPs":"192.168.32.26/32"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nf5bf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001b14200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nf5bf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nf5bf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nf5bf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003593208), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"test1-md-0-m7pwl.c.kubernetes-es-logging.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002177420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003593280)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0035932a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0035932a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0035932ac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201791, loc:(*time.Location)(0x7b57f20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201791, loc:(*time.Location)(0x7b57f20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201791, loc:(*time.Location)(0x7b57f20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201791, loc:(*time.Location)(0x7b57f20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.150.0.4", PodIP:"192.168.32.26", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.32.26"}}, StartTime:(*v1.Time)(0xc0013e8560), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002177500)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002177570)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d03be79d17b0e52924a115413aea76aa16863d9269704bb35a71ec4dcaf010b5", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013e85a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013e8580), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00359332f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 30 21:50:36.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1688" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":283,"completed":102,"skipped":1775,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 30 21:50:38.377: INFO: Initial restart count of pod busybox-7f9c9ca2-34e1-4bf4-8c04-b22406506274 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 21:54:40.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5867" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":283,"completed":103,"skipped":1797,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 21:54:44.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8016" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":283,"completed":104,"skipped":1816,"failed":0}
SSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-20edc509-4f5a-40f0-9181-a4a1076f41e5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 21:54:49.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3742" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":105,"skipped":1822,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 21:54:49.747: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48691c7d-5bfe-4cd9-b489-c988f41dc69c" in namespace "projected-8700" to be "Succeeded or Failed"
Mar 30 21:54:49.786: INFO: Pod "downwardapi-volume-48691c7d-5bfe-4cd9-b489-c988f41dc69c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.010487ms
Mar 30 21:54:51.817: INFO: Pod "downwardapi-volume-48691c7d-5bfe-4cd9-b489-c988f41dc69c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.069583637s
STEP: Saw pod success
Mar 30 21:54:51.817: INFO: Pod "downwardapi-volume-48691c7d-5bfe-4cd9-b489-c988f41dc69c" satisfied condition "Succeeded or Failed"
Mar 30 21:54:51.848: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downwardapi-volume-48691c7d-5bfe-4cd9-b489-c988f41dc69c container client-container: <nil>
STEP: delete the pod
Mar 30 21:54:51.935: INFO: Waiting for pod downwardapi-volume-48691c7d-5bfe-4cd9-b489-c988f41dc69c to disappear
Mar 30 21:54:51.968: INFO: Pod downwardapi-volume-48691c7d-5bfe-4cd9-b489-c988f41dc69c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 21:54:51.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8700" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":106,"skipped":1825,"failed":0}
SS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:175
Mar 30 21:54:52.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9663" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":283,"completed":107,"skipped":1827,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:55:05.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7500" for this suite.
STEP: Destroying namespace "nsdeletetest-163" for this suite.
Mar 30 21:55:05.985: INFO: Namespace nsdeletetest-163 was already deleted
STEP: Destroying namespace "nsdeletetest-7451" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":283,"completed":108,"skipped":1832,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-c53b3077-357a-4fe7-9d27-1733aaf77e53
STEP: Creating a pod to test consume configMaps
Mar 30 21:55:06.208: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0df64847-a17e-42d4-acdd-236e4e1b464c" in namespace "projected-168" to be "Succeeded or Failed"
Mar 30 21:55:06.238: INFO: Pod "pod-projected-configmaps-0df64847-a17e-42d4-acdd-236e4e1b464c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.804016ms
Mar 30 21:55:08.268: INFO: Pod "pod-projected-configmaps-0df64847-a17e-42d4-acdd-236e4e1b464c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060352626s
STEP: Saw pod success
Mar 30 21:55:08.268: INFO: Pod "pod-projected-configmaps-0df64847-a17e-42d4-acdd-236e4e1b464c" satisfied condition "Succeeded or Failed"
Mar 30 21:55:08.297: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-projected-configmaps-0df64847-a17e-42d4-acdd-236e4e1b464c container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 21:55:08.369: INFO: Waiting for pod pod-projected-configmaps-0df64847-a17e-42d4-acdd-236e4e1b464c to disappear
Mar 30 21:55:08.400: INFO: Pod pod-projected-configmaps-0df64847-a17e-42d4-acdd-236e4e1b464c no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 21:55:08.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-168" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":109,"skipped":1839,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Mar 30 21:55:10.928: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 21:55:11.192: INFO: Deleting pod dns-3412...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 21:55:11.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3412" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":283,"completed":110,"skipped":1850,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 16 lines ...
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 21:55:20.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8238" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":283,"completed":111,"skipped":1864,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 30 21:56:01.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0330 21:56:01.119898   26158 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-4381" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":283,"completed":112,"skipped":1900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 417 lines ...
Mar 30 21:56:12.175: INFO: 99 %ile: 894.567709ms
Mar 30 21:56:12.175: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
Mar 30 21:56:12.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9012" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":283,"completed":113,"skipped":1947,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Mar 30 21:56:14.537: INFO: Pod pod-hostip-edcd8104-bd7e-4235-bd66-a0618c7138ed has hostIP: 10.150.0.4
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 21:56:14.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7665" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":283,"completed":114,"skipped":1971,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 21:56:26.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9570" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":283,"completed":115,"skipped":1984,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 122 lines ...
Mar 30 21:57:02.369: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9390/pods","resourceVersion":"14465"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 30 21:57:02.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9390" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":283,"completed":116,"skipped":1985,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 30 21:57:02.850: INFO: stderr: ""
Mar 30 21:57:02.850: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.1.129+933c30359291bf\", GitCommit:\"933c30359291bf2f4adf01d3359deafaf61c143d\", GitTreeState:\"clean\", BuildDate:\"2020-03-27T23:23:53Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-03-12T20:55:23Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 21:57:02.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8142" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":283,"completed":117,"skipped":2003,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 30 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 30 21:57:07.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5447" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":283,"completed":118,"skipped":2018,"failed":0}

------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 30 21:57:07.773: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 30 21:57:07.952: INFO: Waiting up to 5m0s for pod "downward-api-adc65f6b-f47b-41f4-ba4e-4e6ee275b32a" in namespace "downward-api-4501" to be "Succeeded or Failed"
Mar 30 21:57:07.988: INFO: Pod "downward-api-adc65f6b-f47b-41f4-ba4e-4e6ee275b32a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.053569ms
Mar 30 21:57:10.018: INFO: Pod "downward-api-adc65f6b-f47b-41f4-ba4e-4e6ee275b32a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065879281s
STEP: Saw pod success
Mar 30 21:57:10.018: INFO: Pod "downward-api-adc65f6b-f47b-41f4-ba4e-4e6ee275b32a" satisfied condition "Succeeded or Failed"
Mar 30 21:57:10.048: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downward-api-adc65f6b-f47b-41f4-ba4e-4e6ee275b32a container dapi-container: <nil>
STEP: delete the pod
Mar 30 21:57:10.119: INFO: Waiting for pod downward-api-adc65f6b-f47b-41f4-ba4e-4e6ee275b32a to disappear
Mar 30 21:57:10.149: INFO: Pod downward-api-adc65f6b-f47b-41f4-ba4e-4e6ee275b32a no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 30 21:57:10.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4501" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":283,"completed":119,"skipped":2018,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-a266d505-7573-482b-a2b7-7fca2424c4b9
STEP: Creating a pod to test consume secrets
Mar 30 21:57:10.449: INFO: Waiting up to 5m0s for pod "pod-secrets-100050ba-e35b-4624-acdb-720f2a3a66ba" in namespace "secrets-5154" to be "Succeeded or Failed"
Mar 30 21:57:10.480: INFO: Pod "pod-secrets-100050ba-e35b-4624-acdb-720f2a3a66ba": Phase="Pending", Reason="", readiness=false. Elapsed: 30.235553ms
Mar 30 21:57:12.510: INFO: Pod "pod-secrets-100050ba-e35b-4624-acdb-720f2a3a66ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06035789s
STEP: Saw pod success
Mar 30 21:57:12.510: INFO: Pod "pod-secrets-100050ba-e35b-4624-acdb-720f2a3a66ba" satisfied condition "Succeeded or Failed"
Mar 30 21:57:12.540: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-secrets-100050ba-e35b-4624-acdb-720f2a3a66ba container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 21:57:12.611: INFO: Waiting for pod pod-secrets-100050ba-e35b-4624-acdb-720f2a3a66ba to disappear
Mar 30 21:57:12.641: INFO: Pod pod-secrets-100050ba-e35b-4624-acdb-720f2a3a66ba no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 21:57:12.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5154" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":120,"skipped":2025,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 30 21:57:15.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-724" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":283,"completed":121,"skipped":2049,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 27 lines ...
Mar 30 21:57:37.233: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 21:57:38.455: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 30 21:57:38.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-255" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":122,"skipped":2058,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
Mar 30 21:57:38.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-991" for this suite.
STEP: Destroying namespace "nspatchtest-80e7c1a4-0c5d-42a6-bcab-7341f519548c-9060" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":283,"completed":123,"skipped":2062,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
Mar 30 21:57:38.952: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 30 21:57:41.235: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 30 21:57:41.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1311" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":124,"skipped":2066,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Mar 30 21:57:47.730: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
Mar 30 21:57:47.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1100" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":283,"completed":125,"skipped":2095,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 21:58:07.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2488" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":283,"completed":126,"skipped":2156,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-9a7c8651-f6e8-4fa5-9726-bc0cb0b0d200
STEP: Creating a pod to test consume configMaps
Mar 30 21:58:07.281: INFO: Waiting up to 5m0s for pod "pod-configmaps-382302e2-36ca-437a-8a59-abc2b70dc568" in namespace "configmap-5190" to be "Succeeded or Failed"
Mar 30 21:58:07.309: INFO: Pod "pod-configmaps-382302e2-36ca-437a-8a59-abc2b70dc568": Phase="Pending", Reason="", readiness=false. Elapsed: 28.349386ms
Mar 30 21:58:09.339: INFO: Pod "pod-configmaps-382302e2-36ca-437a-8a59-abc2b70dc568": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058211382s
STEP: Saw pod success
Mar 30 21:58:09.339: INFO: Pod "pod-configmaps-382302e2-36ca-437a-8a59-abc2b70dc568" satisfied condition "Succeeded or Failed"
Mar 30 21:58:09.369: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-configmaps-382302e2-36ca-437a-8a59-abc2b70dc568 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 21:58:09.443: INFO: Waiting for pod pod-configmaps-382302e2-36ca-437a-8a59-abc2b70dc568 to disappear
Mar 30 21:58:09.473: INFO: Pod pod-configmaps-382302e2-36ca-437a-8a59-abc2b70dc568 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 21:58:09.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5190" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":127,"skipped":2167,"failed":0}

------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 30 21:58:09.681: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 21:58:12.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-495" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":283,"completed":128,"skipped":2167,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Mar 30 21:58:13.429: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5696 /api/v1/namespaces/watch-5696/configmaps/e2e-watch-test-watch-closed d2e188e0-4c9e-4b96-acb4-f147a00b54af 15088 0 2020-03-30 21:58:13 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 30 21:58:13.429: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5696 /api/v1/namespaces/watch-5696/configmaps/e2e-watch-test-watch-closed d2e188e0-4c9e-4b96-acb4-f147a00b54af 15091 0 2020-03-30 21:58:13 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 30 21:58:13.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5696" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":283,"completed":129,"skipped":2200,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Mar 30 21:58:18.136: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-84n4b" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-84n4b test-rolling-update-deployment-664dd8fc7f- deployment-2001 /api/v1/namespaces/deployment-2001/pods/test-rolling-update-deployment-664dd8fc7f-84n4b 1fddcc6b-f222-4eaf-9cfc-63709ef81e6b 15153 0 2020-03-30 21:58:15 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:664dd8fc7f] map[cni.projectcalico.org/podIP:192.168.32.43/32 cni.projectcalico.org/podIPs:192.168.32.43/32] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 5480f506-c933-4c4a-9997-1846b3a53abc 0xc002e23407 0xc002e23408}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gqxjx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gqxjx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gqxjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-m7pwl.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:58:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:58:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:58:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:58:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.32.43,StartTime:2020-03-30 21:58:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 21:58:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://cd973b79ab0b711f75f2d067885a114afcf272e381b0dd441e96292d7c5853df,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.32.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 30 21:58:18.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2001" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":283,"completed":130,"skipped":2228,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Mar 30 21:58:22.410: INFO: stderr: ""
Mar 30 21:58:22.410: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 21:58:22.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8189" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":283,"completed":131,"skipped":2232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 30 21:58:22.499: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 30 21:58:22.661: INFO: Waiting up to 5m0s for pod "downward-api-f40a4a73-ecb3-4d23-9efd-ad9aaff29277" in namespace "downward-api-6794" to be "Succeeded or Failed"
Mar 30 21:58:22.725: INFO: Pod "downward-api-f40a4a73-ecb3-4d23-9efd-ad9aaff29277": Phase="Pending", Reason="", readiness=false. Elapsed: 63.724423ms
Mar 30 21:58:24.756: INFO: Pod "downward-api-f40a4a73-ecb3-4d23-9efd-ad9aaff29277": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.094604766s
STEP: Saw pod success
Mar 30 21:58:24.756: INFO: Pod "downward-api-f40a4a73-ecb3-4d23-9efd-ad9aaff29277" satisfied condition "Succeeded or Failed"
Mar 30 21:58:24.785: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downward-api-f40a4a73-ecb3-4d23-9efd-ad9aaff29277 container dapi-container: <nil>
STEP: delete the pod
Mar 30 21:58:24.858: INFO: Waiting for pod downward-api-f40a4a73-ecb3-4d23-9efd-ad9aaff29277 to disappear
Mar 30 21:58:24.887: INFO: Pod downward-api-f40a4a73-ecb3-4d23-9efd-ad9aaff29277 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 30 21:58:24.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6794" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":283,"completed":132,"skipped":2282,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 21:58:25.147: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-2f48a16a-0719-4345-ba0e-85179f1a7df7" in namespace "security-context-test-6748" to be "Succeeded or Failed"
Mar 30 21:58:25.180: INFO: Pod "busybox-privileged-false-2f48a16a-0719-4345-ba0e-85179f1a7df7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.707939ms
Mar 30 21:58:27.211: INFO: Pod "busybox-privileged-false-2f48a16a-0719-4345-ba0e-85179f1a7df7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063477905s
Mar 30 21:58:27.211: INFO: Pod "busybox-privileged-false-2f48a16a-0719-4345-ba0e-85179f1a7df7" satisfied condition "Succeeded or Failed"
Mar 30 21:58:27.244: INFO: Got logs for pod "busybox-privileged-false-2f48a16a-0719-4345-ba0e-85179f1a7df7": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 30 21:58:27.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6748" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":133,"skipped":2295,"failed":0}

------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 21:58:27.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9802" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":283,"completed":134,"skipped":2295,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:58:32.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5535" for this suite.
STEP: Destroying namespace "webhook-5535-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":283,"completed":135,"skipped":2296,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Mar 30 21:58:35.005: INFO: Pod "test-recreate-deployment-5f94c574ff-2ppgj" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-2ppgj test-recreate-deployment-5f94c574ff- deployment-6721 /api/v1/namespaces/deployment-6721/pods/test-recreate-deployment-5f94c574ff-2ppgj 0ed7e7af-8605-438d-9b16-8382895ba1ce 15472 0 2020-03-30 21:58:34 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff db0f6086-a94a-4710-b8ec-5ba903b32946 0xc00353ae77 0xc00353ae78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q68hz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q68hz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q68hz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nfkzj.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:58:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-30 21:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 30 21:58:35.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6721" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":283,"completed":136,"skipped":2310,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 30 21:58:37.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5579" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":137,"skipped":2313,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 21:58:37.496: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 30 21:58:37.656: INFO: Waiting up to 5m0s for pod "pod-69aedc56-56b0-4389-ad80-4871d91595cd" in namespace "emptydir-5975" to be "Succeeded or Failed"
Mar 30 21:58:37.687: INFO: Pod "pod-69aedc56-56b0-4389-ad80-4871d91595cd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.448191ms
Mar 30 21:58:39.717: INFO: Pod "pod-69aedc56-56b0-4389-ad80-4871d91595cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060337559s
STEP: Saw pod success
Mar 30 21:58:39.717: INFO: Pod "pod-69aedc56-56b0-4389-ad80-4871d91595cd" satisfied condition "Succeeded or Failed"
Mar 30 21:58:39.747: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-69aedc56-56b0-4389-ad80-4871d91595cd container test-container: <nil>
STEP: delete the pod
Mar 30 21:58:39.818: INFO: Waiting for pod pod-69aedc56-56b0-4389-ad80-4871d91595cd to disappear
Mar 30 21:58:39.848: INFO: Pod pod-69aedc56-56b0-4389-ad80-4871d91595cd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 21:58:39.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5975" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":138,"skipped":2338,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 21:59:40.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2976" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":283,"completed":139,"skipped":2379,"failed":0}
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Mar 30 21:59:43.838: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-5793 pod-service-account-51c59bf2-9685-4e63-a826-b854ab62596d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 30 21:59:44.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5793" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":283,"completed":140,"skipped":2380,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 30 21:59:49.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4183" for this suite.
STEP: Destroying namespace "webhook-4183-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":283,"completed":141,"skipped":2381,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 21:59:49.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d033d23-4ee3-4079-9c49-2b1ae11f528e" in namespace "downward-api-1177" to be "Succeeded or Failed"
Mar 30 21:59:49.862: INFO: Pod "downwardapi-volume-1d033d23-4ee3-4079-9c49-2b1ae11f528e": Phase="Pending", Reason="", readiness=false. Elapsed: 33.153003ms
Mar 30 21:59:51.892: INFO: Pod "downwardapi-volume-1d033d23-4ee3-4079-9c49-2b1ae11f528e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063563682s
STEP: Saw pod success
Mar 30 21:59:51.892: INFO: Pod "downwardapi-volume-1d033d23-4ee3-4079-9c49-2b1ae11f528e" satisfied condition "Succeeded or Failed"
Mar 30 21:59:51.924: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downwardapi-volume-1d033d23-4ee3-4079-9c49-2b1ae11f528e container client-container: <nil>
STEP: delete the pod
Mar 30 21:59:52.006: INFO: Waiting for pod downwardapi-volume-1d033d23-4ee3-4079-9c49-2b1ae11f528e to disappear
Mar 30 21:59:52.036: INFO: Pod downwardapi-volume-1d033d23-4ee3-4079-9c49-2b1ae11f528e no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 21:59:52.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1177" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":142,"skipped":2428,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-23e7560b-332e-4464-9fe0-8fce3abe70b0
STEP: Creating a pod to test consume secrets
Mar 30 21:59:52.360: INFO: Waiting up to 5m0s for pod "pod-secrets-927796e5-025a-43ae-a9ac-778ad324df3a" in namespace "secrets-2414" to be "Succeeded or Failed"
Mar 30 21:59:52.391: INFO: Pod "pod-secrets-927796e5-025a-43ae-a9ac-778ad324df3a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.644374ms
Mar 30 21:59:54.421: INFO: Pod "pod-secrets-927796e5-025a-43ae-a9ac-778ad324df3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061030284s
STEP: Saw pod success
Mar 30 21:59:54.421: INFO: Pod "pod-secrets-927796e5-025a-43ae-a9ac-778ad324df3a" satisfied condition "Succeeded or Failed"
Mar 30 21:59:54.452: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-secrets-927796e5-025a-43ae-a9ac-778ad324df3a container secret-env-test: <nil>
STEP: delete the pod
Mar 30 21:59:54.532: INFO: Waiting for pod pod-secrets-927796e5-025a-43ae-a9ac-778ad324df3a to disappear
Mar 30 21:59:54.563: INFO: Pod pod-secrets-927796e5-025a-43ae-a9ac-778ad324df3a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 30 21:59:54.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2414" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":283,"completed":143,"skipped":2432,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 30 21:59:56.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4603" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":283,"completed":144,"skipped":2508,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 30 21:59:56.999: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 30 21:59:57.155: INFO: Waiting up to 5m0s for pod "downward-api-8855a77a-b172-4310-aede-279cb9a6b730" in namespace "downward-api-8933" to be "Succeeded or Failed"
Mar 30 21:59:57.192: INFO: Pod "downward-api-8855a77a-b172-4310-aede-279cb9a6b730": Phase="Pending", Reason="", readiness=false. Elapsed: 37.090974ms
Mar 30 21:59:59.222: INFO: Pod "downward-api-8855a77a-b172-4310-aede-279cb9a6b730": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.067101725s
STEP: Saw pod success
Mar 30 21:59:59.222: INFO: Pod "downward-api-8855a77a-b172-4310-aede-279cb9a6b730" satisfied condition "Succeeded or Failed"
Mar 30 21:59:59.252: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downward-api-8855a77a-b172-4310-aede-279cb9a6b730 container dapi-container: <nil>
STEP: delete the pod
Mar 30 21:59:59.334: INFO: Waiting for pod downward-api-8855a77a-b172-4310-aede-279cb9a6b730 to disappear
Mar 30 21:59:59.363: INFO: Pod downward-api-8855a77a-b172-4310-aede-279cb9a6b730 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 30 21:59:59.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8933" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":283,"completed":145,"skipped":2510,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 21:59:59.453: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-56536e34-f2cf-43b5-ae56-0039ba4ec15d
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 21:59:59.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7392" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":283,"completed":146,"skipped":2527,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 22 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 22:00:06.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5885" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":283,"completed":147,"skipped":2541,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-1e0b5e94-af95-4811-9c35-07fa4262e1b1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 22:01:20.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3549" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":148,"skipped":2544,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 22:01:20.873: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a55ab5a-d343-4d36-ab31-16217fce9128" in namespace "downward-api-4298" to be "Succeeded or Failed"
Mar 30 22:01:20.907: INFO: Pod "downwardapi-volume-0a55ab5a-d343-4d36-ab31-16217fce9128": Phase="Pending", Reason="", readiness=false. Elapsed: 33.704847ms
Mar 30 22:01:22.938: INFO: Pod "downwardapi-volume-0a55ab5a-d343-4d36-ab31-16217fce9128": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065448816s
STEP: Saw pod success
Mar 30 22:01:22.938: INFO: Pod "downwardapi-volume-0a55ab5a-d343-4d36-ab31-16217fce9128" satisfied condition "Succeeded or Failed"
Mar 30 22:01:22.968: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downwardapi-volume-0a55ab5a-d343-4d36-ab31-16217fce9128 container client-container: <nil>
STEP: delete the pod
Mar 30 22:01:23.042: INFO: Waiting for pod downwardapi-volume-0a55ab5a-d343-4d36-ab31-16217fce9128 to disappear
Mar 30 22:01:23.071: INFO: Pod downwardapi-volume-0a55ab5a-d343-4d36-ab31-16217fce9128 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 22:01:23.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4298" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":149,"skipped":2560,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 59 lines ...
Mar 30 22:03:06.321: INFO: Waiting for statefulset status.replicas updated to 0
Mar 30 22:03:06.361: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 30 22:03:06.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2736" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":283,"completed":150,"skipped":2564,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 30 22:03:06.879: INFO: stderr: ""
Mar 30 22:03:06.879: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 22:03:06.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8367" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":283,"completed":151,"skipped":2569,"failed":0}

------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Mar 30 22:03:14.057: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-5348-crds.spec'
Mar 30 22:03:14.331: INFO: stderr: ""
Mar 30 22:03:14.331: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5348-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Mar 30 22:03:14.331: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-5348-crds.spec.bars'
Mar 30 22:03:14.617: INFO: stderr: ""
Mar 30 22:03:14.617: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5348-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar 30 22:03:14.618: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-5348-crds.spec.bars2'
Mar 30 22:03:14.895: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 22:03:17.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3723" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":283,"completed":152,"skipped":2569,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
Mar 30 22:03:20.625: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 30 22:03:20.625: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 22:03:20.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9962" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":283,"completed":153,"skipped":2576,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 22:03:20.717: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 30 22:03:20.873: INFO: Waiting up to 5m0s for pod "pod-2bdd976a-15e2-4e66-b2e8-c3d8b89962da" in namespace "emptydir-82" to be "Succeeded or Failed"
Mar 30 22:03:20.901: INFO: Pod "pod-2bdd976a-15e2-4e66-b2e8-c3d8b89962da": Phase="Pending", Reason="", readiness=false. Elapsed: 28.695968ms
Mar 30 22:03:22.931: INFO: Pod "pod-2bdd976a-15e2-4e66-b2e8-c3d8b89962da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058647786s
STEP: Saw pod success
Mar 30 22:03:22.931: INFO: Pod "pod-2bdd976a-15e2-4e66-b2e8-c3d8b89962da" satisfied condition "Succeeded or Failed"
Mar 30 22:03:22.961: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-2bdd976a-15e2-4e66-b2e8-c3d8b89962da container test-container: <nil>
STEP: delete the pod
Mar 30 22:03:23.051: INFO: Waiting for pod pod-2bdd976a-15e2-4e66-b2e8-c3d8b89962da to disappear
Mar 30 22:03:23.079: INFO: Pod pod-2bdd976a-15e2-4e66-b2e8-c3d8b89962da no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 22:03:23.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-82" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":154,"skipped":2641,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 30 22:03:29.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7715" for this suite.
STEP: Destroying namespace "webhook-7715-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":283,"completed":155,"skipped":2659,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 22:03:29.521: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9645c256-3706-4077-8dc9-5fe77dbb6dc3" in namespace "projected-6879" to be "Succeeded or Failed"
Mar 30 22:03:29.553: INFO: Pod "downwardapi-volume-9645c256-3706-4077-8dc9-5fe77dbb6dc3": Phase="Pending", Reason="", readiness=false. Elapsed: 32.524143ms
Mar 30 22:03:31.584: INFO: Pod "downwardapi-volume-9645c256-3706-4077-8dc9-5fe77dbb6dc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063366062s
STEP: Saw pod success
Mar 30 22:03:31.584: INFO: Pod "downwardapi-volume-9645c256-3706-4077-8dc9-5fe77dbb6dc3" satisfied condition "Succeeded or Failed"
Mar 30 22:03:31.614: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod downwardapi-volume-9645c256-3706-4077-8dc9-5fe77dbb6dc3 container client-container: <nil>
STEP: delete the pod
Mar 30 22:03:31.694: INFO: Waiting for pod downwardapi-volume-9645c256-3706-4077-8dc9-5fe77dbb6dc3 to disappear
Mar 30 22:03:31.724: INFO: Pod downwardapi-volume-9645c256-3706-4077-8dc9-5fe77dbb6dc3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 22:03:31.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6879" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":156,"skipped":2662,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 22:03:31.812: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 30 22:03:31.975: INFO: Waiting up to 5m0s for pod "pod-fa4f58dd-2992-4d44-b0f6-e49d55be129a" in namespace "emptydir-9163" to be "Succeeded or Failed"
Mar 30 22:03:32.006: INFO: Pod "pod-fa4f58dd-2992-4d44-b0f6-e49d55be129a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.345697ms
Mar 30 22:03:34.037: INFO: Pod "pod-fa4f58dd-2992-4d44-b0f6-e49d55be129a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061191001s
STEP: Saw pod success
Mar 30 22:03:34.037: INFO: Pod "pod-fa4f58dd-2992-4d44-b0f6-e49d55be129a" satisfied condition "Succeeded or Failed"
Mar 30 22:03:34.067: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-fa4f58dd-2992-4d44-b0f6-e49d55be129a container test-container: <nil>
STEP: delete the pod
Mar 30 22:03:34.142: INFO: Waiting for pod pod-fa4f58dd-2992-4d44-b0f6-e49d55be129a to disappear
Mar 30 22:03:34.172: INFO: Pod pod-fa4f58dd-2992-4d44-b0f6-e49d55be129a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 22:03:34.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9163" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":157,"skipped":2679,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 30 22:03:34.267: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Mar 30 22:03:34.443: INFO: Waiting up to 5m0s for pod "client-containers-9c21af15-1035-4bfc-ad93-17077f336df3" in namespace "containers-6186" to be "Succeeded or Failed"
Mar 30 22:03:34.474: INFO: Pod "client-containers-9c21af15-1035-4bfc-ad93-17077f336df3": Phase="Pending", Reason="", readiness=false. Elapsed: 31.080796ms
Mar 30 22:03:36.505: INFO: Pod "client-containers-9c21af15-1035-4bfc-ad93-17077f336df3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062555939s
STEP: Saw pod success
Mar 30 22:03:36.505: INFO: Pod "client-containers-9c21af15-1035-4bfc-ad93-17077f336df3" satisfied condition "Succeeded or Failed"
Mar 30 22:03:36.535: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod client-containers-9c21af15-1035-4bfc-ad93-17077f336df3 container test-container: <nil>
STEP: delete the pod
Mar 30 22:03:36.605: INFO: Waiting for pod client-containers-9c21af15-1035-4bfc-ad93-17077f336df3 to disappear
Mar 30 22:03:36.635: INFO: Pod client-containers-9c21af15-1035-4bfc-ad93-17077f336df3 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 30 22:03:36.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6186" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":283,"completed":158,"skipped":2690,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 22:03:39.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1857" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":283,"completed":159,"skipped":2705,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 46 lines ...
Mar 30 22:04:00.777: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2236/pods","resourceVersion":"17569"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 30 22:04:00.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2236" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":283,"completed":160,"skipped":2750,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 11 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 30 22:04:03.799: INFO: Successfully updated pod "pod-update-activedeadlineseconds-cf037165-b597-4f0c-b3c1-e62575789940"
Mar 30 22:04:03.799: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-cf037165-b597-4f0c-b3c1-e62575789940" in namespace "pods-2356" to be "terminated due to deadline exceeded"
Mar 30 22:04:03.829: INFO: Pod "pod-update-activedeadlineseconds-cf037165-b597-4f0c-b3c1-e62575789940": Phase="Running", Reason="", readiness=true. Elapsed: 29.392985ms
Mar 30 22:04:05.859: INFO: Pod "pod-update-activedeadlineseconds-cf037165-b597-4f0c-b3c1-e62575789940": Phase="Running", Reason="", readiness=true. Elapsed: 2.059337155s
Mar 30 22:04:07.889: INFO: Pod "pod-update-activedeadlineseconds-cf037165-b597-4f0c-b3c1-e62575789940": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.089367253s
Mar 30 22:04:07.889: INFO: Pod "pod-update-activedeadlineseconds-cf037165-b597-4f0c-b3c1-e62575789940" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 22:04:07.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2356" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":283,"completed":161,"skipped":2783,"failed":0}
SSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
Mar 30 22:04:11.377: INFO: Pod "adopt-release-jzpwg": Phase="Running", Reason="", readiness=true. Elapsed: 33.858155ms
Mar 30 22:04:11.377: INFO: Pod "adopt-release-jzpwg" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 30 22:04:11.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1013" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":283,"completed":162,"skipped":2788,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-0aa78313-a6c6-42b9-8bc3-cb4daf7f2cb5
STEP: Creating a pod to test consume configMaps
Mar 30 22:04:11.663: INFO: Waiting up to 5m0s for pod "pod-configmaps-514474cd-6d87-42b7-b8d3-7b002c5b6159" in namespace "configmap-6552" to be "Succeeded or Failed"
Mar 30 22:04:11.696: INFO: Pod "pod-configmaps-514474cd-6d87-42b7-b8d3-7b002c5b6159": Phase="Pending", Reason="", readiness=false. Elapsed: 32.585337ms
Mar 30 22:04:13.726: INFO: Pod "pod-configmaps-514474cd-6d87-42b7-b8d3-7b002c5b6159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062502001s
STEP: Saw pod success
Mar 30 22:04:13.726: INFO: Pod "pod-configmaps-514474cd-6d87-42b7-b8d3-7b002c5b6159" satisfied condition "Succeeded or Failed"
Mar 30 22:04:13.755: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-configmaps-514474cd-6d87-42b7-b8d3-7b002c5b6159 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 22:04:13.828: INFO: Waiting for pod pod-configmaps-514474cd-6d87-42b7-b8d3-7b002c5b6159 to disappear
Mar 30 22:04:13.858: INFO: Pod pod-configmaps-514474cd-6d87-42b7-b8d3-7b002c5b6159 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 22:04:13.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6552" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":163,"skipped":2828,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 22:04:13.944: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 30 22:04:14.104: INFO: Waiting up to 5m0s for pod "pod-b5638ef6-1f79-450e-a0a9-5b7adfab93ad" in namespace "emptydir-3555" to be "Succeeded or Failed"
Mar 30 22:04:14.135: INFO: Pod "pod-b5638ef6-1f79-450e-a0a9-5b7adfab93ad": Phase="Pending", Reason="", readiness=false. Elapsed: 30.848868ms
Mar 30 22:04:16.164: INFO: Pod "pod-b5638ef6-1f79-450e-a0a9-5b7adfab93ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060338669s
STEP: Saw pod success
Mar 30 22:04:16.164: INFO: Pod "pod-b5638ef6-1f79-450e-a0a9-5b7adfab93ad" satisfied condition "Succeeded or Failed"
Mar 30 22:04:16.197: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-b5638ef6-1f79-450e-a0a9-5b7adfab93ad container test-container: <nil>
STEP: delete the pod
Mar 30 22:04:16.276: INFO: Waiting for pod pod-b5638ef6-1f79-450e-a0a9-5b7adfab93ad to disappear
Mar 30 22:04:16.306: INFO: Pod pod-b5638ef6-1f79-450e-a0a9-5b7adfab93ad no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 22:04:16.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3555" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":164,"skipped":2838,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-kpbc
STEP: Creating a pod to test atomic-volume-subpath
Mar 30 22:04:16.628: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kpbc" in namespace "subpath-3346" to be "Succeeded or Failed"
Mar 30 22:04:16.670: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Pending", Reason="", readiness=false. Elapsed: 42.329704ms
Mar 30 22:04:18.700: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Running", Reason="", readiness=true. Elapsed: 2.072380822s
Mar 30 22:04:20.730: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Running", Reason="", readiness=true. Elapsed: 4.102284236s
Mar 30 22:04:22.759: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Running", Reason="", readiness=true. Elapsed: 6.131307819s
Mar 30 22:04:24.789: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Running", Reason="", readiness=true. Elapsed: 8.161319003s
Mar 30 22:04:26.819: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Running", Reason="", readiness=true. Elapsed: 10.191555521s
Mar 30 22:04:28.849: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Running", Reason="", readiness=true. Elapsed: 12.221247764s
Mar 30 22:04:30.879: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Running", Reason="", readiness=true. Elapsed: 14.2513185s
Mar 30 22:04:32.908: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Running", Reason="", readiness=true. Elapsed: 16.280760447s
Mar 30 22:04:34.943: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Running", Reason="", readiness=true. Elapsed: 18.314883263s
Mar 30 22:04:36.973: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Running", Reason="", readiness=true. Elapsed: 20.345620063s
Mar 30 22:04:39.005: INFO: Pod "pod-subpath-test-secret-kpbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.377127205s
STEP: Saw pod success
Mar 30 22:04:39.005: INFO: Pod "pod-subpath-test-secret-kpbc" satisfied condition "Succeeded or Failed"
Mar 30 22:04:39.036: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-subpath-test-secret-kpbc container test-container-subpath-secret-kpbc: <nil>
STEP: delete the pod
Mar 30 22:04:39.113: INFO: Waiting for pod pod-subpath-test-secret-kpbc to disappear
Mar 30 22:04:39.145: INFO: Pod pod-subpath-test-secret-kpbc no longer exists
STEP: Deleting pod pod-subpath-test-secret-kpbc
Mar 30 22:04:39.145: INFO: Deleting pod "pod-subpath-test-secret-kpbc" in namespace "subpath-3346"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 30 22:04:39.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3346" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":283,"completed":165,"skipped":2889,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 22:04:39.270: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 30 22:04:39.430: INFO: Waiting up to 5m0s for pod "pod-d3406650-4313-4cb2-9367-e018b0029085" in namespace "emptydir-1370" to be "Succeeded or Failed"
Mar 30 22:04:39.468: INFO: Pod "pod-d3406650-4313-4cb2-9367-e018b0029085": Phase="Pending", Reason="", readiness=false. Elapsed: 37.048709ms
Mar 30 22:04:41.499: INFO: Pod "pod-d3406650-4313-4cb2-9367-e018b0029085": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.068453895s
STEP: Saw pod success
Mar 30 22:04:41.499: INFO: Pod "pod-d3406650-4313-4cb2-9367-e018b0029085" satisfied condition "Succeeded or Failed"
Mar 30 22:04:41.529: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-d3406650-4313-4cb2-9367-e018b0029085 container test-container: <nil>
STEP: delete the pod
Mar 30 22:04:41.601: INFO: Waiting for pod pod-d3406650-4313-4cb2-9367-e018b0029085 to disappear
Mar 30 22:04:41.630: INFO: Pod pod-d3406650-4313-4cb2-9367-e018b0029085 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 22:04:41.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1370" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":166,"skipped":2915,"failed":0}

------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-bd1dd766-3619-4b66-9aa1-961165f46496
STEP: Creating a pod to test consume secrets
Mar 30 22:04:41.934: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-18ea73e0-b536-4a66-8155-6fb65627b12e" in namespace "projected-8895" to be "Succeeded or Failed"
Mar 30 22:04:41.965: INFO: Pod "pod-projected-secrets-18ea73e0-b536-4a66-8155-6fb65627b12e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.500718ms
Mar 30 22:04:43.996: INFO: Pod "pod-projected-secrets-18ea73e0-b536-4a66-8155-6fb65627b12e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061325693s
STEP: Saw pod success
Mar 30 22:04:43.996: INFO: Pod "pod-projected-secrets-18ea73e0-b536-4a66-8155-6fb65627b12e" satisfied condition "Succeeded or Failed"
Mar 30 22:04:44.026: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-projected-secrets-18ea73e0-b536-4a66-8155-6fb65627b12e container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 22:04:44.101: INFO: Waiting for pod pod-projected-secrets-18ea73e0-b536-4a66-8155-6fb65627b12e to disappear
Mar 30 22:04:44.132: INFO: Pod pod-projected-secrets-18ea73e0-b536-4a66-8155-6fb65627b12e no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 22:04:44.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8895" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":283,"completed":167,"skipped":2915,"failed":0}

------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-6ee93ca7-d977-4bde-b16a-23ca6fbe3ba0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 22:04:48.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1433" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":168,"skipped":2915,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 22:04:49.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33e9583e-3253-4fac-8102-d0c21842660a" in namespace "downward-api-6567" to be "Succeeded or Failed"
Mar 30 22:04:49.242: INFO: Pod "downwardapi-volume-33e9583e-3253-4fac-8102-d0c21842660a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.230012ms
Mar 30 22:04:51.274: INFO: Pod "downwardapi-volume-33e9583e-3253-4fac-8102-d0c21842660a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06124745s
STEP: Saw pod success
Mar 30 22:04:51.274: INFO: Pod "downwardapi-volume-33e9583e-3253-4fac-8102-d0c21842660a" satisfied condition "Succeeded or Failed"
Mar 30 22:04:51.306: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod downwardapi-volume-33e9583e-3253-4fac-8102-d0c21842660a container client-container: <nil>
STEP: delete the pod
Mar 30 22:04:51.378: INFO: Waiting for pod downwardapi-volume-33e9583e-3253-4fac-8102-d0c21842660a to disappear
Mar 30 22:04:51.415: INFO: Pod downwardapi-volume-33e9583e-3253-4fac-8102-d0c21842660a no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 22:04:51.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6567" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":169,"skipped":2919,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 30 22:04:58.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9364" for this suite.
STEP: Destroying namespace "webhook-9364-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":283,"completed":170,"skipped":2942,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 30 22:05:03.637: INFO: stderr: ""
Mar 30 22:05:03.637: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9128-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 22:05:06.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4413" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":283,"completed":171,"skipped":2955,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 22:05:07.172: INFO: Waiting up to 5m0s for pod "downwardapi-volume-200af838-e33b-4a28-b9a9-c63365ea8922" in namespace "projected-971" to be "Succeeded or Failed"
Mar 30 22:05:07.205: INFO: Pod "downwardapi-volume-200af838-e33b-4a28-b9a9-c63365ea8922": Phase="Pending", Reason="", readiness=false. Elapsed: 33.27672ms
Mar 30 22:05:09.235: INFO: Pod "downwardapi-volume-200af838-e33b-4a28-b9a9-c63365ea8922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062986393s
STEP: Saw pod success
Mar 30 22:05:09.235: INFO: Pod "downwardapi-volume-200af838-e33b-4a28-b9a9-c63365ea8922" satisfied condition "Succeeded or Failed"
Mar 30 22:05:09.265: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downwardapi-volume-200af838-e33b-4a28-b9a9-c63365ea8922 container client-container: <nil>
STEP: delete the pod
Mar 30 22:05:09.342: INFO: Waiting for pod downwardapi-volume-200af838-e33b-4a28-b9a9-c63365ea8922 to disappear
Mar 30 22:05:09.371: INFO: Pod downwardapi-volume-200af838-e33b-4a28-b9a9-c63365ea8922 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 22:05:09.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-971" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":172,"skipped":2963,"failed":0}
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 30 22:05:11.759: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 30 22:05:11.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2548" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":283,"completed":173,"skipped":2965,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-f67be1ca-4610-40dd-acfd-ee8e9516e377
STEP: Creating a pod to test consume secrets
Mar 30 22:05:12.126: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3a022e88-b434-43dc-85ba-054daa63b6e9" in namespace "projected-7831" to be "Succeeded or Failed"
Mar 30 22:05:12.159: INFO: Pod "pod-projected-secrets-3a022e88-b434-43dc-85ba-054daa63b6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.988695ms
Mar 30 22:05:14.193: INFO: Pod "pod-projected-secrets-3a022e88-b434-43dc-85ba-054daa63b6e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066731329s
STEP: Saw pod success
Mar 30 22:05:14.193: INFO: Pod "pod-projected-secrets-3a022e88-b434-43dc-85ba-054daa63b6e9" satisfied condition "Succeeded or Failed"
Mar 30 22:05:14.224: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-projected-secrets-3a022e88-b434-43dc-85ba-054daa63b6e9 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 30 22:05:14.301: INFO: Waiting for pod pod-projected-secrets-3a022e88-b434-43dc-85ba-054daa63b6e9 to disappear
Mar 30 22:05:14.331: INFO: Pod pod-projected-secrets-3a022e88-b434-43dc-85ba-054daa63b6e9 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 22:05:14.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7831" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":174,"skipped":2983,"failed":0}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 30 22:05:18.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2615" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":283,"completed":175,"skipped":2988,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 22:05:21.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-735" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":176,"skipped":2990,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 30 22:05:27.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5555" for this suite.
STEP: Destroying namespace "webhook-5555-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":283,"completed":177,"skipped":3053,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 30 22:05:47.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5672" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":283,"completed":178,"skipped":3071,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
Mar 30 22:07:22.241: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:175
Mar 30 22:07:22.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-3669" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":283,"completed":179,"skipped":3119,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 30 22:07:24.655: INFO: Initial restart count of pod test-webserver-da67f3f0-0cee-47c8-8a89-0d12a70fafe3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 22:11:26.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-817" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":180,"skipped":3121,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 30 22:11:28.678: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 30 22:11:28.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5776" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":181,"skipped":3122,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 30 22:11:28.843: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Mar 30 22:11:29.041: INFO: Waiting up to 5m0s for pod "var-expansion-974d9c6e-626d-40b1-8d94-ece02a9fada8" in namespace "var-expansion-9282" to be "Succeeded or Failed"
Mar 30 22:11:29.072: INFO: Pod "var-expansion-974d9c6e-626d-40b1-8d94-ece02a9fada8": Phase="Pending", Reason="", readiness=false. Elapsed: 31.171347ms
Mar 30 22:11:31.102: INFO: Pod "var-expansion-974d9c6e-626d-40b1-8d94-ece02a9fada8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061781466s
STEP: Saw pod success
Mar 30 22:11:31.102: INFO: Pod "var-expansion-974d9c6e-626d-40b1-8d94-ece02a9fada8" satisfied condition "Succeeded or Failed"
Mar 30 22:11:31.131: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod var-expansion-974d9c6e-626d-40b1-8d94-ece02a9fada8 container dapi-container: <nil>
STEP: delete the pod
Mar 30 22:11:31.218: INFO: Waiting for pod var-expansion-974d9c6e-626d-40b1-8d94-ece02a9fada8 to disappear
Mar 30 22:11:31.247: INFO: Pod var-expansion-974d9c6e-626d-40b1-8d94-ece02a9fada8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 22:11:31.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9282" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":283,"completed":182,"skipped":3132,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 22:11:31.337: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Mar 30 22:11:31.467: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 30 22:11:34.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8667" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":283,"completed":183,"skipped":3198,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 22:11:45.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1172" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":283,"completed":184,"skipped":3212,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 22:11:45.822: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 30 22:11:45.977: INFO: Waiting up to 5m0s for pod "pod-f6235e12-1f95-4364-85b4-29423b9b5993" in namespace "emptydir-826" to be "Succeeded or Failed"
Mar 30 22:11:46.008: INFO: Pod "pod-f6235e12-1f95-4364-85b4-29423b9b5993": Phase="Pending", Reason="", readiness=false. Elapsed: 30.882673ms
Mar 30 22:11:48.038: INFO: Pod "pod-f6235e12-1f95-4364-85b4-29423b9b5993": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060870936s
STEP: Saw pod success
Mar 30 22:11:48.038: INFO: Pod "pod-f6235e12-1f95-4364-85b4-29423b9b5993" satisfied condition "Succeeded or Failed"
Mar 30 22:11:48.068: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-f6235e12-1f95-4364-85b4-29423b9b5993 container test-container: <nil>
STEP: delete the pod
Mar 30 22:11:48.138: INFO: Waiting for pod pod-f6235e12-1f95-4364-85b4-29423b9b5993 to disappear
Mar 30 22:11:48.167: INFO: Pod pod-f6235e12-1f95-4364-85b4-29423b9b5993 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 22:11:48.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-826" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":185,"skipped":3237,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 43 lines ...
Mar 30 22:13:39.749: INFO: Waiting for statefulset status.replicas updated to 0
Mar 30 22:13:39.778: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 30 22:13:39.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-757" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":283,"completed":186,"skipped":3244,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 22:13:39.957: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 22:15:40.236: INFO: Deleting pod "var-expansion-89290401-f98c-4fbe-8173-78f49644b2dc" in namespace "var-expansion-5366"
Mar 30 22:15:40.268: INFO: Wait up to 5m0s for pod "var-expansion-89290401-f98c-4fbe-8173-78f49644b2dc" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 22:15:44.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5366" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":283,"completed":187,"skipped":3251,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 22:15:44.416: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 30 22:15:50.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4838" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":283,"completed":188,"skipped":3259,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-fdb58e07-fb47-422e-9fae-03961cb7daa2
STEP: Creating a pod to test consume secrets
Mar 30 22:15:50.893: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-552d8d40-48c7-4d0f-b63b-38accde77c52" in namespace "projected-3944" to be "Succeeded or Failed"
Mar 30 22:15:50.922: INFO: Pod "pod-projected-secrets-552d8d40-48c7-4d0f-b63b-38accde77c52": Phase="Pending", Reason="", readiness=false. Elapsed: 29.083417ms
Mar 30 22:15:52.952: INFO: Pod "pod-projected-secrets-552d8d40-48c7-4d0f-b63b-38accde77c52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.05960215s
STEP: Saw pod success
Mar 30 22:15:52.952: INFO: Pod "pod-projected-secrets-552d8d40-48c7-4d0f-b63b-38accde77c52" satisfied condition "Succeeded or Failed"
Mar 30 22:15:52.982: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-projected-secrets-552d8d40-48c7-4d0f-b63b-38accde77c52 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 30 22:15:53.074: INFO: Waiting for pod pod-projected-secrets-552d8d40-48c7-4d0f-b63b-38accde77c52 to disappear
Mar 30 22:15:53.104: INFO: Pod pod-projected-secrets-552d8d40-48c7-4d0f-b63b-38accde77c52 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 22:15:53.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3944" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":189,"skipped":3289,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 22:15:53.195: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-a0044ce6-794f-4c4e-b30d-66e07c4073ac
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 30 22:15:53.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-621" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":283,"completed":190,"skipped":3292,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-196c0b5c-3eb2-4a6b-8047-90fb040e1767
STEP: Creating a pod to test consume secrets
Mar 30 22:15:53.629: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-61b33483-4978-4852-922e-c4cc1648bc6f" in namespace "projected-1987" to be "Succeeded or Failed"
Mar 30 22:15:53.659: INFO: Pod "pod-projected-secrets-61b33483-4978-4852-922e-c4cc1648bc6f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.430604ms
Mar 30 22:15:55.689: INFO: Pod "pod-projected-secrets-61b33483-4978-4852-922e-c4cc1648bc6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059550445s
STEP: Saw pod success
Mar 30 22:15:55.689: INFO: Pod "pod-projected-secrets-61b33483-4978-4852-922e-c4cc1648bc6f" satisfied condition "Succeeded or Failed"
Mar 30 22:15:55.720: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-projected-secrets-61b33483-4978-4852-922e-c4cc1648bc6f container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 30 22:15:55.807: INFO: Waiting for pod pod-projected-secrets-61b33483-4978-4852-922e-c4cc1648bc6f to disappear
Mar 30 22:15:55.838: INFO: Pod pod-projected-secrets-61b33483-4978-4852-922e-c4cc1648bc6f no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 22:15:55.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1987" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":191,"skipped":3373,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 25 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:175
Mar 30 22:16:05.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3827" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":283,"completed":192,"skipped":3395,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 26 lines ...
Mar 30 22:16:24.409: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 22:16:24.648: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 30 22:16:24.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5748" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":193,"skipped":3407,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 34 lines ...
Mar 30 22:18:38.673: INFO: Deleting pod "var-expansion-f0c568d1-31a3-4a92-a2aa-f35d6e402c14" in namespace "var-expansion-623"
Mar 30 22:18:38.712: INFO: Wait up to 5m0s for pod "var-expansion-f0c568d1-31a3-4a92-a2aa-f35d6e402c14" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 22:19:22.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-623" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":283,"completed":194,"skipped":3433,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 30 22:19:22.862: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Mar 30 22:19:23.020: INFO: Waiting up to 5m0s for pod "client-containers-57f7ab55-a9ce-462d-9a8a-775bfc4f09bb" in namespace "containers-5835" to be "Succeeded or Failed"
Mar 30 22:19:23.048: INFO: Pod "client-containers-57f7ab55-a9ce-462d-9a8a-775bfc4f09bb": Phase="Pending", Reason="", readiness=false. Elapsed: 28.507104ms
Mar 30 22:19:25.079: INFO: Pod "client-containers-57f7ab55-a9ce-462d-9a8a-775bfc4f09bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058566231s
STEP: Saw pod success
Mar 30 22:19:25.079: INFO: Pod "client-containers-57f7ab55-a9ce-462d-9a8a-775bfc4f09bb" satisfied condition "Succeeded or Failed"
Mar 30 22:19:25.108: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod client-containers-57f7ab55-a9ce-462d-9a8a-775bfc4f09bb container test-container: <nil>
STEP: delete the pod
Mar 30 22:19:25.197: INFO: Waiting for pod client-containers-57f7ab55-a9ce-462d-9a8a-775bfc4f09bb to disappear
Mar 30 22:19:25.226: INFO: Pod client-containers-57f7ab55-a9ce-462d-9a8a-775bfc4f09bb no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 30 22:19:25.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5835" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":283,"completed":195,"skipped":3460,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
Mar 30 22:19:30.876: INFO: stdout: "service/rm3 exposed\n"
Mar 30 22:19:30.907: INFO: Service rm3 in namespace kubectl-9918 found.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 22:19:32.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9918" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":283,"completed":196,"skipped":3477,"failed":0}

------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Mar 30 22:19:41.131: INFO: stderr: ""
Mar 30 22:19:41.131: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 22:19:41.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1996" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":283,"completed":197,"skipped":3477,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-ad3640f5-0f76-49df-9ab1-361e1bf19707
STEP: Creating a pod to test consume secrets
Mar 30 22:19:41.411: INFO: Waiting up to 5m0s for pod "pod-secrets-7385ef8f-58ed-40ec-be0e-599d583b52b8" in namespace "secrets-4158" to be "Succeeded or Failed"
Mar 30 22:19:41.441: INFO: Pod "pod-secrets-7385ef8f-58ed-40ec-be0e-599d583b52b8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.583375ms
Mar 30 22:19:43.471: INFO: Pod "pod-secrets-7385ef8f-58ed-40ec-be0e-599d583b52b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059626737s
STEP: Saw pod success
Mar 30 22:19:43.471: INFO: Pod "pod-secrets-7385ef8f-58ed-40ec-be0e-599d583b52b8" satisfied condition "Succeeded or Failed"
Mar 30 22:19:43.500: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-secrets-7385ef8f-58ed-40ec-be0e-599d583b52b8 container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 22:19:43.585: INFO: Waiting for pod pod-secrets-7385ef8f-58ed-40ec-be0e-599d583b52b8 to disappear
Mar 30 22:19:43.617: INFO: Pod pod-secrets-7385ef8f-58ed-40ec-be0e-599d583b52b8 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 22:19:43.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4158" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":283,"completed":198,"skipped":3478,"failed":0}
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-a9fca1ba-0ee0-45bd-8a6b-2842f6790138
STEP: Creating secret with name secret-projected-all-test-volume-0b7be706-39dd-428d-9d31-95cf94f92a91
STEP: Creating a pod to test Check all projections for projected volume plugin
Mar 30 22:19:43.966: INFO: Waiting up to 5m0s for pod "projected-volume-3195cefa-3389-4639-bb8f-abf5f1604a76" in namespace "projected-6106" to be "Succeeded or Failed"
Mar 30 22:19:43.999: INFO: Pod "projected-volume-3195cefa-3389-4639-bb8f-abf5f1604a76": Phase="Pending", Reason="", readiness=false. Elapsed: 33.602912ms
Mar 30 22:19:46.031: INFO: Pod "projected-volume-3195cefa-3389-4639-bb8f-abf5f1604a76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064744444s
STEP: Saw pod success
Mar 30 22:19:46.031: INFO: Pod "projected-volume-3195cefa-3389-4639-bb8f-abf5f1604a76" satisfied condition "Succeeded or Failed"
Mar 30 22:19:46.061: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod projected-volume-3195cefa-3389-4639-bb8f-abf5f1604a76 container projected-all-volume-test: <nil>
STEP: delete the pod
Mar 30 22:19:46.140: INFO: Waiting for pod projected-volume-3195cefa-3389-4639-bb8f-abf5f1604a76 to disappear
Mar 30 22:19:46.174: INFO: Pod projected-volume-3195cefa-3389-4639-bb8f-abf5f1604a76 no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:175
Mar 30 22:19:46.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6106" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":283,"completed":199,"skipped":3479,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-a3cb1eee-7108-420c-abb4-c40a0898de85
STEP: Creating a pod to test consume configMaps
Mar 30 22:19:46.456: INFO: Waiting up to 5m0s for pod "pod-configmaps-6826c0d2-f3d8-4dcd-b2cf-d49db25571c8" in namespace "configmap-2327" to be "Succeeded or Failed"
Mar 30 22:19:46.490: INFO: Pod "pod-configmaps-6826c0d2-f3d8-4dcd-b2cf-d49db25571c8": Phase="Pending", Reason="", readiness=false. Elapsed: 33.597496ms
Mar 30 22:19:48.520: INFO: Pod "pod-configmaps-6826c0d2-f3d8-4dcd-b2cf-d49db25571c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063565023s
STEP: Saw pod success
Mar 30 22:19:48.520: INFO: Pod "pod-configmaps-6826c0d2-f3d8-4dcd-b2cf-d49db25571c8" satisfied condition "Succeeded or Failed"
Mar 30 22:19:48.550: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-configmaps-6826c0d2-f3d8-4dcd-b2cf-d49db25571c8 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 22:19:48.621: INFO: Waiting for pod pod-configmaps-6826c0d2-f3d8-4dcd-b2cf-d49db25571c8 to disappear
Mar 30 22:19:48.650: INFO: Pod pod-configmaps-6826c0d2-f3d8-4dcd-b2cf-d49db25571c8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 22:19:48.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2327" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":283,"completed":200,"skipped":3513,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-baf4e8d0-53cf-44f6-a871-216ccb30df70
STEP: Creating a pod to test consume secrets
Mar 30 22:19:49.000: INFO: Waiting up to 5m0s for pod "pod-secrets-85556422-2847-4a6e-b37d-1518b0143604" in namespace "secrets-9533" to be "Succeeded or Failed"
Mar 30 22:19:49.031: INFO: Pod "pod-secrets-85556422-2847-4a6e-b37d-1518b0143604": Phase="Pending", Reason="", readiness=false. Elapsed: 31.218457ms
Mar 30 22:19:51.061: INFO: Pod "pod-secrets-85556422-2847-4a6e-b37d-1518b0143604": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061310018s
STEP: Saw pod success
Mar 30 22:19:51.061: INFO: Pod "pod-secrets-85556422-2847-4a6e-b37d-1518b0143604" satisfied condition "Succeeded or Failed"
Mar 30 22:19:51.090: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-secrets-85556422-2847-4a6e-b37d-1518b0143604 container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 22:19:51.162: INFO: Waiting for pod pod-secrets-85556422-2847-4a6e-b37d-1518b0143604 to disappear
Mar 30 22:19:51.191: INFO: Pod pod-secrets-85556422-2847-4a6e-b37d-1518b0143604 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 22:19:51.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9533" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":201,"skipped":3521,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  test/e2e/framework/framework.go:175
Mar 30 22:19:56.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6833" for this suite.
STEP: Destroying namespace "webhook-6833-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":283,"completed":202,"skipped":3531,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 22:19:57.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29ef938b-6cac-4bac-b5f0-e4180f37f431" in namespace "downward-api-9841" to be "Succeeded or Failed"
Mar 30 22:19:57.518: INFO: Pod "downwardapi-volume-29ef938b-6cac-4bac-b5f0-e4180f37f431": Phase="Pending", Reason="", readiness=false. Elapsed: 49.956917ms
Mar 30 22:19:59.610: INFO: Pod "downwardapi-volume-29ef938b-6cac-4bac-b5f0-e4180f37f431": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.142227338s
STEP: Saw pod success
Mar 30 22:19:59.610: INFO: Pod "downwardapi-volume-29ef938b-6cac-4bac-b5f0-e4180f37f431" satisfied condition "Succeeded or Failed"
Mar 30 22:19:59.644: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downwardapi-volume-29ef938b-6cac-4bac-b5f0-e4180f37f431 container client-container: <nil>
STEP: delete the pod
Mar 30 22:19:59.726: INFO: Waiting for pod downwardapi-volume-29ef938b-6cac-4bac-b5f0-e4180f37f431 to disappear
Mar 30 22:19:59.756: INFO: Pod downwardapi-volume-29ef938b-6cac-4bac-b5f0-e4180f37f431 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 22:19:59.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9841" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":203,"skipped":3564,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 22:19:59.844: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Mar 30 22:22:00.659: INFO: Successfully updated pod "var-expansion-3694aead-9c7f-4f25-b209-865f20980e71"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Mar 30 22:22:02.719: INFO: Deleting pod "var-expansion-3694aead-9c7f-4f25-b209-865f20980e71" in namespace "var-expansion-3325"
Mar 30 22:22:02.753: INFO: Wait up to 5m0s for pod "var-expansion-3694aead-9c7f-4f25-b209-865f20980e71" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 22:22:40.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3325" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":283,"completed":204,"skipped":3572,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 22:23:02.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5866" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":283,"completed":205,"skipped":3632,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 22:23:02.431: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 30 22:23:02.802: INFO: DaemonSet pods can't tolerate node test1-control-plane-qvzgv.c.kubernetes-es-logging.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 22:23:02.840: INFO: Number of nodes with available pods: 0
Mar 30 22:23:02.840: INFO: Node test1-md-0-m7pwl.c.kubernetes-es-logging.internal is running more than one daemon pod
Mar 30 22:23:03.897: INFO: DaemonSet pods can't tolerate node test1-control-plane-qvzgv.c.kubernetes-es-logging.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 22:23:03.928: INFO: Number of nodes with available pods: 0
Mar 30 22:23:03.928: INFO: Node test1-md-0-m7pwl.c.kubernetes-es-logging.internal is running more than one daemon pod
Mar 30 22:23:04.896: INFO: DaemonSet pods can't tolerate node test1-control-plane-qvzgv.c.kubernetes-es-logging.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 22:23:04.927: INFO: Number of nodes with available pods: 2
Mar 30 22:23:04.927: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 30 22:23:05.054: INFO: DaemonSet pods can't tolerate node test1-control-plane-qvzgv.c.kubernetes-es-logging.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 22:23:05.087: INFO: Number of nodes with available pods: 1
Mar 30 22:23:05.087: INFO: Node test1-md-0-nfkzj.c.kubernetes-es-logging.internal is running more than one daemon pod
Mar 30 22:23:06.142: INFO: DaemonSet pods can't tolerate node test1-control-plane-qvzgv.c.kubernetes-es-logging.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 22:23:06.173: INFO: Number of nodes with available pods: 1
Mar 30 22:23:06.173: INFO: Node test1-md-0-nfkzj.c.kubernetes-es-logging.internal is running more than one daemon pod
Mar 30 22:23:07.143: INFO: DaemonSet pods can't tolerate node test1-control-plane-qvzgv.c.kubernetes-es-logging.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 22:23:07.174: INFO: Number of nodes with available pods: 2
Mar 30 22:23:07.174: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4530, will wait for the garbage collector to delete the pods
Mar 30 22:23:07.346: INFO: Deleting DaemonSet.extensions daemon-set took: 33.206179ms
Mar 30 22:23:07.747: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.247359ms
... skipping 4 lines ...
Mar 30 22:23:20.736: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4530/pods","resourceVersion":"22814"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 30 22:23:20.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4530" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":283,"completed":206,"skipped":3638,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 30 22:23:21.052: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 22:23:21.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2186" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":283,"completed":207,"skipped":3653,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-75c3c507-4e92-4f20-ac6c-dd4595046c9e
STEP: Creating a pod to test consume configMaps
Mar 30 22:23:22.097: INFO: Waiting up to 5m0s for pod "pod-configmaps-697d74c5-b22c-48b6-916d-086654c0dde3" in namespace "configmap-5444" to be "Succeeded or Failed"
Mar 30 22:23:22.126: INFO: Pod "pod-configmaps-697d74c5-b22c-48b6-916d-086654c0dde3": Phase="Pending", Reason="", readiness=false. Elapsed: 28.979578ms
Mar 30 22:23:24.156: INFO: Pod "pod-configmaps-697d74c5-b22c-48b6-916d-086654c0dde3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059334167s
STEP: Saw pod success
Mar 30 22:23:24.156: INFO: Pod "pod-configmaps-697d74c5-b22c-48b6-916d-086654c0dde3" satisfied condition "Succeeded or Failed"
Mar 30 22:23:24.187: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-configmaps-697d74c5-b22c-48b6-916d-086654c0dde3 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 22:23:24.274: INFO: Waiting for pod pod-configmaps-697d74c5-b22c-48b6-916d-086654c0dde3 to disappear
Mar 30 22:23:24.303: INFO: Pod pod-configmaps-697d74c5-b22c-48b6-916d-086654c0dde3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 22:23:24.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5444" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":208,"skipped":3665,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 22:23:24.553: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36d886c2-1054-4d96-aa97-f790d1e2cc01" in namespace "projected-4763" to be "Succeeded or Failed"
Mar 30 22:23:24.582: INFO: Pod "downwardapi-volume-36d886c2-1054-4d96-aa97-f790d1e2cc01": Phase="Pending", Reason="", readiness=false. Elapsed: 29.076673ms
Mar 30 22:23:26.611: INFO: Pod "downwardapi-volume-36d886c2-1054-4d96-aa97-f790d1e2cc01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058765051s
STEP: Saw pod success
Mar 30 22:23:26.611: INFO: Pod "downwardapi-volume-36d886c2-1054-4d96-aa97-f790d1e2cc01" satisfied condition "Succeeded or Failed"
Mar 30 22:23:26.643: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod downwardapi-volume-36d886c2-1054-4d96-aa97-f790d1e2cc01 container client-container: <nil>
STEP: delete the pod
Mar 30 22:23:26.717: INFO: Waiting for pod downwardapi-volume-36d886c2-1054-4d96-aa97-f790d1e2cc01 to disappear
Mar 30 22:23:26.747: INFO: Pod downwardapi-volume-36d886c2-1054-4d96-aa97-f790d1e2cc01 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 22:23:26.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4763" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":209,"skipped":3668,"failed":0}
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 21 lines ...
Mar 30 22:23:51.053: INFO: The status of Pod test-webserver-11b7b68b-f004-4f9c-933e-f8d3d750a380 is Running (Ready = true)
Mar 30 22:23:51.083: INFO: Container started at 2020-03-30 22:23:27 +0000 UTC, pod became ready at 2020-03-30 22:23:50 +0000 UTC
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 22:23:51.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5247" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":283,"completed":210,"skipped":3671,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-4842/configmap-test-2c5c28ed-ab62-4bcd-bff4-ce7039bff65e
STEP: Creating a pod to test consume configMaps
Mar 30 22:23:51.364: INFO: Waiting up to 5m0s for pod "pod-configmaps-0bff0a51-3108-4f7e-9b06-069844651296" in namespace "configmap-4842" to be "Succeeded or Failed"
Mar 30 22:23:51.394: INFO: Pod "pod-configmaps-0bff0a51-3108-4f7e-9b06-069844651296": Phase="Pending", Reason="", readiness=false. Elapsed: 29.916111ms
Mar 30 22:23:53.425: INFO: Pod "pod-configmaps-0bff0a51-3108-4f7e-9b06-069844651296": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060287352s
STEP: Saw pod success
Mar 30 22:23:53.425: INFO: Pod "pod-configmaps-0bff0a51-3108-4f7e-9b06-069844651296" satisfied condition "Succeeded or Failed"
Mar 30 22:23:53.454: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-configmaps-0bff0a51-3108-4f7e-9b06-069844651296 container env-test: <nil>
STEP: delete the pod
Mar 30 22:23:53.524: INFO: Waiting for pod pod-configmaps-0bff0a51-3108-4f7e-9b06-069844651296 to disappear
Mar 30 22:23:53.554: INFO: Pod pod-configmaps-0bff0a51-3108-4f7e-9b06-069844651296 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 22:23:53.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4842" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":211,"skipped":3682,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 30 22:23:53.644: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Mar 30 22:23:53.810: INFO: Waiting up to 5m0s for pod "var-expansion-5e10b79e-3fae-40c5-b64b-1a14781afa7f" in namespace "var-expansion-8185" to be "Succeeded or Failed"
Mar 30 22:23:53.840: INFO: Pod "var-expansion-5e10b79e-3fae-40c5-b64b-1a14781afa7f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.901898ms
Mar 30 22:23:55.871: INFO: Pod "var-expansion-5e10b79e-3fae-40c5-b64b-1a14781afa7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060583649s
STEP: Saw pod success
Mar 30 22:23:55.871: INFO: Pod "var-expansion-5e10b79e-3fae-40c5-b64b-1a14781afa7f" satisfied condition "Succeeded or Failed"
Mar 30 22:23:55.901: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod var-expansion-5e10b79e-3fae-40c5-b64b-1a14781afa7f container dapi-container: <nil>
STEP: delete the pod
Mar 30 22:23:55.976: INFO: Waiting for pod var-expansion-5e10b79e-3fae-40c5-b64b-1a14781afa7f to disappear
Mar 30 22:23:56.006: INFO: Pod var-expansion-5e10b79e-3fae-40c5-b64b-1a14781afa7f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 22:23:56.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8185" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":283,"completed":212,"skipped":3715,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] HostPath
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test hostPath mode
Mar 30 22:23:56.262: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7230" to be "Succeeded or Failed"
Mar 30 22:23:56.294: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 31.684841ms
Mar 30 22:23:58.326: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064161971s
STEP: Saw pod success
Mar 30 22:23:58.326: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Mar 30 22:23:58.356: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Mar 30 22:23:58.427: INFO: Waiting for pod pod-host-path-test to disappear
Mar 30 22:23:58.456: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:175
Mar 30 22:23:58.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7230" for this suite.
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":213,"skipped":3723,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 22:24:15.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6948" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":283,"completed":214,"skipped":3725,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 46 lines ...
Mar 30 22:25:28.840: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7460/pods","resourceVersion":"23476"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 30 22:25:28.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7460" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":283,"completed":215,"skipped":3762,"failed":0}
SS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 30 22:25:31.289: INFO: Trying to dial the pod
Mar 30 22:25:36.383: INFO: Controller my-hostname-basic-5276098f-06b9-495f-8a5b-5209c26c4dc3: Got expected result from replica 1 [my-hostname-basic-5276098f-06b9-495f-8a5b-5209c26c4dc3-hltlv]: "my-hostname-basic-5276098f-06b9-495f-8a5b-5209c26c4dc3-hltlv", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 30 22:25:36.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-472" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":216,"skipped":3764,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Mar 30 22:25:38.721: INFO: Trying to dial the pod
Mar 30 22:25:43.815: INFO: Controller my-hostname-basic-ee312bea-9016-4a78-82d1-d1e92addf09d: Got expected result from replica 1 [my-hostname-basic-ee312bea-9016-4a78-82d1-d1e92addf09d-gs6f9]: "my-hostname-basic-ee312bea-9016-4a78-82d1-d1e92addf09d-gs6f9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 30 22:25:43.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6630" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":217,"skipped":3772,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 30 22:25:46.767: INFO: Successfully updated pod "labelsupdate3eea331a-a2d9-40af-9b27-887e12138f72"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 22:25:50.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4573" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":283,"completed":218,"skipped":3808,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1381
STEP: Creating statefulset with conflicting port in namespace statefulset-1381
STEP: Waiting until pod test-pod will start running in namespace statefulset-1381
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1381
Mar 30 22:25:53.330: INFO: Observed stateful pod in namespace: statefulset-1381, name: ss-0, uid: 7d5eb4d9-7b0e-4645-b575-e8d435bee326, status phase: Pending. Waiting for statefulset controller to delete.
Mar 30 22:25:53.795: INFO: Observed stateful pod in namespace: statefulset-1381, name: ss-0, uid: 7d5eb4d9-7b0e-4645-b575-e8d435bee326, status phase: Failed. Waiting for statefulset controller to delete.
Mar 30 22:25:53.806: INFO: Observed stateful pod in namespace: statefulset-1381, name: ss-0, uid: 7d5eb4d9-7b0e-4645-b575-e8d435bee326, status phase: Failed. Waiting for statefulset controller to delete.
Mar 30 22:25:53.814: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1381
STEP: Removing pod with conflicting port in namespace statefulset-1381
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1381 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:110
Mar 30 22:25:55.924: INFO: Deleting all statefulset in ns statefulset-1381
Mar 30 22:25:55.955: INFO: Scaling statefulset ss to 0
Mar 30 22:26:06.085: INFO: Waiting for statefulset status.replicas updated to 0
Mar 30 22:26:06.115: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 30 22:26:06.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1381" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":283,"completed":219,"skipped":3853,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 22:26:06.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9621b29-9d27-4ada-8a2a-deb7235955ea" in namespace "projected-3142" to be "Succeeded or Failed"
Mar 30 22:26:06.498: INFO: Pod "downwardapi-volume-e9621b29-9d27-4ada-8a2a-deb7235955ea": Phase="Pending", Reason="", readiness=false. Elapsed: 29.707692ms
Mar 30 22:26:08.529: INFO: Pod "downwardapi-volume-e9621b29-9d27-4ada-8a2a-deb7235955ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06044631s
STEP: Saw pod success
Mar 30 22:26:08.529: INFO: Pod "downwardapi-volume-e9621b29-9d27-4ada-8a2a-deb7235955ea" satisfied condition "Succeeded or Failed"
Mar 30 22:26:08.559: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod downwardapi-volume-e9621b29-9d27-4ada-8a2a-deb7235955ea container client-container: <nil>
STEP: delete the pod
Mar 30 22:26:08.635: INFO: Waiting for pod downwardapi-volume-e9621b29-9d27-4ada-8a2a-deb7235955ea to disappear
Mar 30 22:26:08.666: INFO: Pod downwardapi-volume-e9621b29-9d27-4ada-8a2a-deb7235955ea no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 22:26:08.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3142" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":283,"completed":220,"skipped":3867,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-edf947ea-b288-4227-aa33-3067dbff25bf
STEP: Creating a pod to test consume secrets
Mar 30 22:26:09.122: INFO: Waiting up to 5m0s for pod "pod-secrets-8f1a3078-018f-4a11-92e9-b910ca4a327f" in namespace "secrets-288" to be "Succeeded or Failed"
Mar 30 22:26:09.152: INFO: Pod "pod-secrets-8f1a3078-018f-4a11-92e9-b910ca4a327f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.126574ms
Mar 30 22:26:11.183: INFO: Pod "pod-secrets-8f1a3078-018f-4a11-92e9-b910ca4a327f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061169693s
STEP: Saw pod success
Mar 30 22:26:11.183: INFO: Pod "pod-secrets-8f1a3078-018f-4a11-92e9-b910ca4a327f" satisfied condition "Succeeded or Failed"
Mar 30 22:26:11.213: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-secrets-8f1a3078-018f-4a11-92e9-b910ca4a327f container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 22:26:11.284: INFO: Waiting for pod pod-secrets-8f1a3078-018f-4a11-92e9-b910ca4a327f to disappear
Mar 30 22:26:11.315: INFO: Pod pod-secrets-8f1a3078-018f-4a11-92e9-b910ca4a327f no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 22:26:11.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-288" for this suite.
STEP: Destroying namespace "secret-namespace-7665" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":283,"completed":221,"skipped":3910,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

W0330 22:26:17.826168   26158 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 30 22:26:17.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9793" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":283,"completed":222,"skipped":3916,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 22:26:18.054: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-80931fad-ea8a-409c-8dda-4857486c3939" in namespace "security-context-test-5031" to be "Succeeded or Failed"
Mar 30 22:26:18.082: INFO: Pod "alpine-nnp-false-80931fad-ea8a-409c-8dda-4857486c3939": Phase="Pending", Reason="", readiness=false. Elapsed: 28.642279ms
Mar 30 22:26:20.113: INFO: Pod "alpine-nnp-false-80931fad-ea8a-409c-8dda-4857486c3939": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059468954s
Mar 30 22:26:22.143: INFO: Pod "alpine-nnp-false-80931fad-ea8a-409c-8dda-4857486c3939": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089494126s
Mar 30 22:26:22.143: INFO: Pod "alpine-nnp-false-80931fad-ea8a-409c-8dda-4857486c3939" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 30 22:26:22.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5031" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":223,"skipped":3931,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 30 22:26:27.366: INFO: stderr: ""
Mar 30 22:26:27.366: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6735-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 22:26:31.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7667" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":283,"completed":224,"skipped":3949,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 14 lines ...
Mar 30 22:26:35.916: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 22:26:48.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2634" for this suite.
STEP: Destroying namespace "webhook-2634-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":283,"completed":225,"skipped":3955,"failed":0}
SS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 22:26:49.085: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-9a0da841-6646-481a-a391-c76ded516bdb" in namespace "security-context-test-6570" to be "Succeeded or Failed"
Mar 30 22:26:49.119: INFO: Pod "busybox-readonly-false-9a0da841-6646-481a-a391-c76ded516bdb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.28815ms
Mar 30 22:26:51.148: INFO: Pod "busybox-readonly-false-9a0da841-6646-481a-a391-c76ded516bdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062487029s
Mar 30 22:26:51.148: INFO: Pod "busybox-readonly-false-9a0da841-6646-481a-a391-c76ded516bdb" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 30 22:26:51.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6570" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":283,"completed":226,"skipped":3957,"failed":0}
S
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
Mar 30 22:26:58.859: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:175
Mar 30 22:26:58.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-2564" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":283,"completed":227,"skipped":3958,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-12be8929-2a08-4a92-a098-3584e1d4f8cf
STEP: Creating a pod to test consume configMaps
Mar 30 22:26:59.206: INFO: Waiting up to 5m0s for pod "pod-configmaps-711acce7-121b-4fe8-bf32-dc6bfaed234b" in namespace "configmap-4679" to be "Succeeded or Failed"
Mar 30 22:26:59.239: INFO: Pod "pod-configmaps-711acce7-121b-4fe8-bf32-dc6bfaed234b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.964153ms
Mar 30 22:27:01.270: INFO: Pod "pod-configmaps-711acce7-121b-4fe8-bf32-dc6bfaed234b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063586008s
STEP: Saw pod success
Mar 30 22:27:01.270: INFO: Pod "pod-configmaps-711acce7-121b-4fe8-bf32-dc6bfaed234b" satisfied condition "Succeeded or Failed"
Mar 30 22:27:01.300: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-configmaps-711acce7-121b-4fe8-bf32-dc6bfaed234b container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 22:27:01.373: INFO: Waiting for pod pod-configmaps-711acce7-121b-4fe8-bf32-dc6bfaed234b to disappear
Mar 30 22:27:01.403: INFO: Pod pod-configmaps-711acce7-121b-4fe8-bf32-dc6bfaed234b no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 22:27:01.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4679" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":228,"skipped":3975,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Mar 30 22:29:34.063: INFO: Restart count of pod container-probe-6925/liveness-5b845de9-5136-4ee0-9566-92b27df3f999 is now 5 (2m30.323752281s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 22:29:34.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6925" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":283,"completed":229,"skipped":4004,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Mar 30 22:29:37.168: INFO: stderr: ""
Mar 30 22:29:37.168: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 22:29:37.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4359" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":283,"completed":230,"skipped":4040,"failed":0}
SSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 22:29:37.256: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 22:31:37.504: INFO: Deleting pod "var-expansion-ca8d081a-2559-4721-ba01-60a9444e788e" in namespace "var-expansion-400"
Mar 30 22:31:37.540: INFO: Wait up to 5m0s for pod "var-expansion-ca8d081a-2559-4721-ba01-60a9444e788e" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 22:31:39.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-400" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":283,"completed":231,"skipped":4043,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 22:31:51.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4316" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":283,"completed":232,"skipped":4051,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 22:31:51.143: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar 30 22:31:51.306: INFO: Waiting up to 5m0s for pod "pod-4efb927d-5172-4f5a-9f5d-fd6b2816e343" in namespace "emptydir-8776" to be "Succeeded or Failed"
Mar 30 22:31:51.335: INFO: Pod "pod-4efb927d-5172-4f5a-9f5d-fd6b2816e343": Phase="Pending", Reason="", readiness=false. Elapsed: 29.288631ms
Mar 30 22:31:53.365: INFO: Pod "pod-4efb927d-5172-4f5a-9f5d-fd6b2816e343": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059345699s
STEP: Saw pod success
Mar 30 22:31:53.365: INFO: Pod "pod-4efb927d-5172-4f5a-9f5d-fd6b2816e343" satisfied condition "Succeeded or Failed"
Mar 30 22:31:53.395: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-4efb927d-5172-4f5a-9f5d-fd6b2816e343 container test-container: <nil>
STEP: delete the pod
Mar 30 22:31:53.480: INFO: Waiting for pod pod-4efb927d-5172-4f5a-9f5d-fd6b2816e343 to disappear
Mar 30 22:31:53.509: INFO: Pod pod-4efb927d-5172-4f5a-9f5d-fd6b2816e343 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 22:31:53.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8776" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":233,"skipped":4061,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 30 22:31:53.597: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Mar 30 22:31:53.764: INFO: Waiting up to 5m0s for pod "var-expansion-f49aa7af-216d-48f2-9703-dab062a7a64b" in namespace "var-expansion-8584" to be "Succeeded or Failed"
Mar 30 22:31:53.794: INFO: Pod "var-expansion-f49aa7af-216d-48f2-9703-dab062a7a64b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.593803ms
Mar 30 22:31:55.824: INFO: Pod "var-expansion-f49aa7af-216d-48f2-9703-dab062a7a64b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060502679s
STEP: Saw pod success
Mar 30 22:31:55.824: INFO: Pod "var-expansion-f49aa7af-216d-48f2-9703-dab062a7a64b" satisfied condition "Succeeded or Failed"
Mar 30 22:31:55.855: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod var-expansion-f49aa7af-216d-48f2-9703-dab062a7a64b container dapi-container: <nil>
STEP: delete the pod
Mar 30 22:31:55.938: INFO: Waiting for pod var-expansion-f49aa7af-216d-48f2-9703-dab062a7a64b to disappear
Mar 30 22:31:55.968: INFO: Pod var-expansion-f49aa7af-216d-48f2-9703-dab062a7a64b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 22:31:55.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8584" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":283,"completed":234,"skipped":4071,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 22:31:56.054: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 30 22:31:56.215: INFO: Waiting up to 5m0s for pod "pod-a17cf3aa-3aa5-4941-a767-aaa870f49c00" in namespace "emptydir-8935" to be "Succeeded or Failed"
Mar 30 22:31:56.250: INFO: Pod "pod-a17cf3aa-3aa5-4941-a767-aaa870f49c00": Phase="Pending", Reason="", readiness=false. Elapsed: 35.202596ms
Mar 30 22:31:58.281: INFO: Pod "pod-a17cf3aa-3aa5-4941-a767-aaa870f49c00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066126427s
STEP: Saw pod success
Mar 30 22:31:58.281: INFO: Pod "pod-a17cf3aa-3aa5-4941-a767-aaa870f49c00" satisfied condition "Succeeded or Failed"
Mar 30 22:31:58.311: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-a17cf3aa-3aa5-4941-a767-aaa870f49c00 container test-container: <nil>
STEP: delete the pod
Mar 30 22:31:58.393: INFO: Waiting for pod pod-a17cf3aa-3aa5-4941-a767-aaa870f49c00 to disappear
Mar 30 22:31:58.422: INFO: Pod pod-a17cf3aa-3aa5-4941-a767-aaa870f49c00 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 22:31:58.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8935" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":235,"skipped":4072,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 30 22:31:58.637: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 22:31:58.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7455" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":283,"completed":236,"skipped":4084,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 22:32:12.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2898" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":283,"completed":237,"skipped":4088,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 22:32:14.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8278" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":283,"completed":238,"skipped":4093,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 22:32:15.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-317" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":283,"completed":239,"skipped":4099,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Mar 30 22:32:18.004: INFO: Successfully updated pod "labelsupdate261837c8-6f00-4261-86eb-b8831f15c98c"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 22:32:22.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1665" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":283,"completed":240,"skipped":4108,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Mar 30 22:32:23.238: INFO: created pod pod-service-account-nomountsa-nomountspec
Mar 30 22:32:23.238: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 30 22:32:23.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8733" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":283,"completed":241,"skipped":4124,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-73cb6421-5ab4-4e51-b5bb-5e152cecec79
STEP: Creating a pod to test consume configMaps
Mar 30 22:32:23.530: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f66d059-8866-4262-88cb-cc4b8e2e52cf" in namespace "configmap-6066" to be "Succeeded or Failed"
Mar 30 22:32:23.559: INFO: Pod "pod-configmaps-6f66d059-8866-4262-88cb-cc4b8e2e52cf": Phase="Pending", Reason="", readiness=false. Elapsed: 29.687693ms
Mar 30 22:32:25.590: INFO: Pod "pod-configmaps-6f66d059-8866-4262-88cb-cc4b8e2e52cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060218789s
STEP: Saw pod success
Mar 30 22:32:25.590: INFO: Pod "pod-configmaps-6f66d059-8866-4262-88cb-cc4b8e2e52cf" satisfied condition "Succeeded or Failed"
Mar 30 22:32:25.624: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-configmaps-6f66d059-8866-4262-88cb-cc4b8e2e52cf container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 22:32:25.700: INFO: Waiting for pod pod-configmaps-6f66d059-8866-4262-88cb-cc4b8e2e52cf to disappear
Mar 30 22:32:25.729: INFO: Pod pod-configmaps-6f66d059-8866-4262-88cb-cc4b8e2e52cf no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 22:32:25.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6066" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":242,"skipped":4126,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 22:32:30.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1088" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":283,"completed":243,"skipped":4156,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-95b4790a-b0c7-44e9-95ea-2c152047b151
STEP: Creating a pod to test consume configMaps
Mar 30 22:32:31.176: INFO: Waiting up to 5m0s for pod "pod-configmaps-7759e414-cd8c-48fb-a289-76384202ad8f" in namespace "configmap-8222" to be "Succeeded or Failed"
Mar 30 22:32:31.211: INFO: Pod "pod-configmaps-7759e414-cd8c-48fb-a289-76384202ad8f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.861461ms
Mar 30 22:32:33.242: INFO: Pod "pod-configmaps-7759e414-cd8c-48fb-a289-76384202ad8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065895199s
STEP: Saw pod success
Mar 30 22:32:33.242: INFO: Pod "pod-configmaps-7759e414-cd8c-48fb-a289-76384202ad8f" satisfied condition "Succeeded or Failed"
Mar 30 22:32:33.272: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-configmaps-7759e414-cd8c-48fb-a289-76384202ad8f container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 22:32:33.346: INFO: Waiting for pod pod-configmaps-7759e414-cd8c-48fb-a289-76384202ad8f to disappear
Mar 30 22:32:33.377: INFO: Pod pod-configmaps-7759e414-cd8c-48fb-a289-76384202ad8f no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 22:32:33.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8222" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":244,"skipped":4175,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 30 22:32:33.605: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 30 22:32:37.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5069" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":283,"completed":245,"skipped":4217,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 131 lines ...
Mar 30 22:33:40.760: INFO: ss-1  test1-md-0-m7pwl.c.kubernetes-es-logging.internal  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 22:32:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 22:33:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 22:33:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 22:32:58 +0000 UTC  }]
Mar 30 22:33:40.760: INFO: 
Mar 30 22:33:40.760: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9588
Mar 30 22:33:41.791: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:33:42.064: INFO: rc: 1
Mar 30 22:33:42.064: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Mar 30 22:33:52.065: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:33:52.259: INFO: rc: 1
Mar 30 22:33:52.259: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:34:02.259: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:34:02.452: INFO: rc: 1
Mar 30 22:34:02.452: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:34:12.453: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:34:12.647: INFO: rc: 1
Mar 30 22:34:12.647: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:34:22.647: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:34:22.858: INFO: rc: 1
Mar 30 22:34:22.858: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:34:32.859: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:34:33.052: INFO: rc: 1
Mar 30 22:34:33.052: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:34:43.052: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:34:43.246: INFO: rc: 1
Mar 30 22:34:43.246: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:34:53.247: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:34:53.442: INFO: rc: 1
Mar 30 22:34:53.443: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:35:03.443: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:35:03.632: INFO: rc: 1
Mar 30 22:35:03.632: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:35:13.632: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:35:13.822: INFO: rc: 1
Mar 30 22:35:13.823: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:35:23.823: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:35:24.014: INFO: rc: 1
Mar 30 22:35:24.014: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:35:34.015: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:35:34.211: INFO: rc: 1
Mar 30 22:35:34.211: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:35:44.212: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:35:44.404: INFO: rc: 1
Mar 30 22:35:44.404: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:35:54.404: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:35:54.598: INFO: rc: 1
Mar 30 22:35:54.598: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:36:04.598: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:36:04.800: INFO: rc: 1
Mar 30 22:36:04.800: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:36:14.800: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:36:14.989: INFO: rc: 1
Mar 30 22:36:14.989: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:36:24.989: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:36:25.201: INFO: rc: 1
Mar 30 22:36:25.201: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:36:35.201: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:36:35.626: INFO: rc: 1
Mar 30 22:36:35.626: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:36:45.626: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:36:45.826: INFO: rc: 1
Mar 30 22:36:45.826: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:36:55.826: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:36:56.023: INFO: rc: 1
Mar 30 22:36:56.023: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:37:06.023: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:37:06.223: INFO: rc: 1
Mar 30 22:37:06.224: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:37:16.224: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:37:16.420: INFO: rc: 1
Mar 30 22:37:16.420: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:37:26.420: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:37:26.618: INFO: rc: 1
Mar 30 22:37:26.618: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:37:36.618: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:37:36.820: INFO: rc: 1
Mar 30 22:37:36.820: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:37:46.820: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:37:47.017: INFO: rc: 1
Mar 30 22:37:47.017: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:37:57.018: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:37:57.230: INFO: rc: 1
Mar 30 22:37:57.230: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:38:07.230: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:38:07.442: INFO: rc: 1
Mar 30 22:38:07.442: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:38:17.442: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:38:17.669: INFO: rc: 1
Mar 30 22:38:17.669: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:38:27.670: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:38:27.903: INFO: rc: 1
Mar 30 22:38:27.903: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:38:37.904: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:38:38.126: INFO: rc: 1
Mar 30 22:38:38.126: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Mar 30 22:38:48.127: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:38:48.322: INFO: rc: 1
Mar 30 22:38:48.322: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
Mar 30 22:38:48.322: INFO: Scaling statefulset ss to 0
Mar 30 22:38:48.414: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":283,"completed":246,"skipped":4230,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 30 22:38:53.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6394" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":283,"completed":247,"skipped":4236,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 30 22:38:53.747: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy --unix-socket=/tmp/kubectl-proxy-unix925781482/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 22:38:53.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6200" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":283,"completed":248,"skipped":4251,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 27 lines ...
Mar 30 22:39:12.378: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 30 22:39:12.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5853" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":283,"completed":249,"skipped":4285,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 30 22:39:12.636: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 22:39:12.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1216" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":283,"completed":250,"skipped":4298,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 30 22:39:23.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4367" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":283,"completed":251,"skipped":4298,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-jt7p
STEP: Creating a pod to test atomic-volume-subpath
Mar 30 22:39:23.725: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jt7p" in namespace "subpath-8414" to be "Succeeded or Failed"
Mar 30 22:39:23.759: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Pending", Reason="", readiness=false. Elapsed: 34.026819ms
Mar 30 22:39:25.790: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Running", Reason="", readiness=true. Elapsed: 2.064834507s
Mar 30 22:39:27.821: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Running", Reason="", readiness=true. Elapsed: 4.095516861s
Mar 30 22:39:29.852: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Running", Reason="", readiness=true. Elapsed: 6.127071989s
Mar 30 22:39:31.883: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Running", Reason="", readiness=true. Elapsed: 8.157516423s
Mar 30 22:39:33.913: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Running", Reason="", readiness=true. Elapsed: 10.188219285s
Mar 30 22:39:35.944: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Running", Reason="", readiness=true. Elapsed: 12.218652573s
Mar 30 22:39:37.975: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Running", Reason="", readiness=true. Elapsed: 14.249486836s
Mar 30 22:39:40.005: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Running", Reason="", readiness=true. Elapsed: 16.279876827s
Mar 30 22:39:42.036: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Running", Reason="", readiness=true. Elapsed: 18.310540907s
Mar 30 22:39:44.066: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Running", Reason="", readiness=true. Elapsed: 20.341216081s
Mar 30 22:39:46.097: INFO: Pod "pod-subpath-test-projected-jt7p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.371670571s
STEP: Saw pod success
Mar 30 22:39:46.097: INFO: Pod "pod-subpath-test-projected-jt7p" satisfied condition "Succeeded or Failed"
Mar 30 22:39:46.126: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-subpath-test-projected-jt7p container test-container-subpath-projected-jt7p: <nil>
STEP: delete the pod
Mar 30 22:39:46.215: INFO: Waiting for pod pod-subpath-test-projected-jt7p to disappear
Mar 30 22:39:46.245: INFO: Pod pod-subpath-test-projected-jt7p no longer exists
STEP: Deleting pod pod-subpath-test-projected-jt7p
Mar 30 22:39:46.245: INFO: Deleting pod "pod-subpath-test-projected-jt7p" in namespace "subpath-8414"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 30 22:39:46.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8414" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":283,"completed":252,"skipped":4313,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 22:40:07.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3331" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":283,"completed":253,"skipped":4328,"failed":0}

------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 22:40:24.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-317" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":283,"completed":254,"skipped":4328,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 30 22:40:26.988: INFO: Initial restart count of pod liveness-193c32ce-f847-4152-a737-24b0faafa0c8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 22:44:28.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9099" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":283,"completed":255,"skipped":4342,"failed":0}
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 43 lines ...
Mar 30 22:44:47.813: INFO: Pod "test-rollover-deployment-78df7bc796-69hs7" is available:
&Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-69hs7 test-rollover-deployment-78df7bc796- deployment-4521 /api/v1/namespaces/deployment-4521/pods/test-rollover-deployment-78df7bc796-69hs7 4d0b8977-61a2-4823-adb5-76cabcf46794 28547 0 2020-03-30 22:44:35 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78df7bc796] map[cni.projectcalico.org/podIP:192.168.154.247/32 cni.projectcalico.org/podIPs:192.168.154.247/32] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 6b283636-11b1-458d-91d0-64ae555ed33b 0xc004b34927 0xc004b34928}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mfzx9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mfzx9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mfzx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nfkzj.c.kubernetes-es-logging.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:44:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:44:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:44:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:44:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.154.247,StartTime:2020-03-30 22:44:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 22:44:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://1a8305cbbdc84cb5bcccbe6a9d2b44816e4deb311daaa08d6689731b87d5e4b9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.154.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 30 22:44:47.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4521" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":283,"completed":256,"skipped":4345,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-bac431e0-3f8d-4d9e-bb37-571170e8bab6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 22:44:52.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3752" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":257,"skipped":4363,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 22:45:03.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7292" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":283,"completed":258,"skipped":4373,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 22:45:03.423: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 30 22:45:03.608: INFO: Waiting up to 5m0s for pod "pod-179a5d91-5bd4-46cb-b9cd-cc82c0f0264a" in namespace "emptydir-8823" to be "Succeeded or Failed"
Mar 30 22:45:03.637: INFO: Pod "pod-179a5d91-5bd4-46cb-b9cd-cc82c0f0264a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.908496ms
Mar 30 22:45:05.666: INFO: Pod "pod-179a5d91-5bd4-46cb-b9cd-cc82c0f0264a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.05836911s
STEP: Saw pod success
Mar 30 22:45:05.666: INFO: Pod "pod-179a5d91-5bd4-46cb-b9cd-cc82c0f0264a" satisfied condition "Succeeded or Failed"
Mar 30 22:45:05.697: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-179a5d91-5bd4-46cb-b9cd-cc82c0f0264a container test-container: <nil>
STEP: delete the pod
Mar 30 22:45:05.781: INFO: Waiting for pod pod-179a5d91-5bd4-46cb-b9cd-cc82c0f0264a to disappear
Mar 30 22:45:05.810: INFO: Pod pod-179a5d91-5bd4-46cb-b9cd-cc82c0f0264a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 22:45:05.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8823" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":259,"skipped":4380,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 30 22:45:05.898: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Mar 30 22:45:06.055: INFO: Waiting up to 5m0s for pod "var-expansion-54a5f79e-a77f-4dfa-a3d4-8967e1a11759" in namespace "var-expansion-7587" to be "Succeeded or Failed"
Mar 30 22:45:06.086: INFO: Pod "var-expansion-54a5f79e-a77f-4dfa-a3d4-8967e1a11759": Phase="Pending", Reason="", readiness=false. Elapsed: 30.796696ms
Mar 30 22:45:08.116: INFO: Pod "var-expansion-54a5f79e-a77f-4dfa-a3d4-8967e1a11759": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061290915s
STEP: Saw pod success
Mar 30 22:45:08.116: INFO: Pod "var-expansion-54a5f79e-a77f-4dfa-a3d4-8967e1a11759" satisfied condition "Succeeded or Failed"
Mar 30 22:45:08.147: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod var-expansion-54a5f79e-a77f-4dfa-a3d4-8967e1a11759 container dapi-container: <nil>
STEP: delete the pod
Mar 30 22:45:08.221: INFO: Waiting for pod var-expansion-54a5f79e-a77f-4dfa-a3d4-8967e1a11759 to disappear
Mar 30 22:45:08.251: INFO: Pod var-expansion-54a5f79e-a77f-4dfa-a3d4-8967e1a11759 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 22:45:08.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7587" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":283,"completed":260,"skipped":4412,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 22:45:08.499: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eba5e37a-f4ee-44ac-9598-5c276cb403c0" in namespace "projected-5453" to be "Succeeded or Failed"
Mar 30 22:45:08.533: INFO: Pod "downwardapi-volume-eba5e37a-f4ee-44ac-9598-5c276cb403c0": Phase="Pending", Reason="", readiness=false. Elapsed: 33.978744ms
Mar 30 22:45:10.563: INFO: Pod "downwardapi-volume-eba5e37a-f4ee-44ac-9598-5c276cb403c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063932762s
STEP: Saw pod success
Mar 30 22:45:10.563: INFO: Pod "downwardapi-volume-eba5e37a-f4ee-44ac-9598-5c276cb403c0" satisfied condition "Succeeded or Failed"
Mar 30 22:45:10.593: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod downwardapi-volume-eba5e37a-f4ee-44ac-9598-5c276cb403c0 container client-container: <nil>
STEP: delete the pod
Mar 30 22:45:10.665: INFO: Waiting for pod downwardapi-volume-eba5e37a-f4ee-44ac-9598-5c276cb403c0 to disappear
Mar 30 22:45:10.695: INFO: Pod downwardapi-volume-eba5e37a-f4ee-44ac-9598-5c276cb403c0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 22:45:10.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5453" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":283,"completed":261,"skipped":4412,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
Mar 30 22:45:17.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5872" for this suite.
STEP: Destroying namespace "nsdeletetest-7317" for this suite.
Mar 30 22:45:17.411: INFO: Namespace nsdeletetest-7317 was already deleted
STEP: Destroying namespace "nsdeletetest-6262" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":283,"completed":262,"skipped":4420,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 22:45:17.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-178ca19e-3651-4d75-aa83-32ef45a9496e" in namespace "downward-api-1639" to be "Succeeded or Failed"
Mar 30 22:45:17.637: INFO: Pod "downwardapi-volume-178ca19e-3651-4d75-aa83-32ef45a9496e": Phase="Pending", Reason="", readiness=false. Elapsed: 29.637925ms
Mar 30 22:45:19.668: INFO: Pod "downwardapi-volume-178ca19e-3651-4d75-aa83-32ef45a9496e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059923657s
STEP: Saw pod success
Mar 30 22:45:19.668: INFO: Pod "downwardapi-volume-178ca19e-3651-4d75-aa83-32ef45a9496e" satisfied condition "Succeeded or Failed"
Mar 30 22:45:19.697: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod downwardapi-volume-178ca19e-3651-4d75-aa83-32ef45a9496e container client-container: <nil>
STEP: delete the pod
Mar 30 22:45:19.784: INFO: Waiting for pod downwardapi-volume-178ca19e-3651-4d75-aa83-32ef45a9496e to disappear
Mar 30 22:45:19.819: INFO: Pod downwardapi-volume-178ca19e-3651-4d75-aa83-32ef45a9496e no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 22:45:19.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1639" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":283,"completed":263,"skipped":4426,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 22:45:20.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19ff835c-eb76-4cc7-bf26-a63ea29f0e8a" in namespace "downward-api-3235" to be "Succeeded or Failed"
Mar 30 22:45:20.100: INFO: Pod "downwardapi-volume-19ff835c-eb76-4cc7-bf26-a63ea29f0e8a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.480561ms
Mar 30 22:45:22.129: INFO: Pod "downwardapi-volume-19ff835c-eb76-4cc7-bf26-a63ea29f0e8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061387863s
STEP: Saw pod success
Mar 30 22:45:22.130: INFO: Pod "downwardapi-volume-19ff835c-eb76-4cc7-bf26-a63ea29f0e8a" satisfied condition "Succeeded or Failed"
Mar 30 22:45:22.161: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod downwardapi-volume-19ff835c-eb76-4cc7-bf26-a63ea29f0e8a container client-container: <nil>
STEP: delete the pod
Mar 30 22:45:22.232: INFO: Waiting for pod downwardapi-volume-19ff835c-eb76-4cc7-bf26-a63ea29f0e8a to disappear
Mar 30 22:45:22.263: INFO: Pod downwardapi-volume-19ff835c-eb76-4cc7-bf26-a63ea29f0e8a no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 22:45:22.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3235" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":264,"skipped":4450,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 30 22:45:22.506: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 22:45:25.841: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 22:45:38.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1088" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":283,"completed":265,"skipped":4456,"failed":0}

------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Mar 30 22:45:41.778: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 22:45:41.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1501" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":283,"completed":266,"skipped":4456,"failed":0}

------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:180
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 22:45:44.211: INFO: Waiting up to 5m0s for pod "client-envvars-4afc9e67-8bc7-4f29-9c76-eb19fc18aba3" in namespace "pods-3163" to be "Succeeded or Failed"
Mar 30 22:45:44.248: INFO: Pod "client-envvars-4afc9e67-8bc7-4f29-9c76-eb19fc18aba3": Phase="Pending", Reason="", readiness=false. Elapsed: 36.806132ms
Mar 30 22:45:46.278: INFO: Pod "client-envvars-4afc9e67-8bc7-4f29-9c76-eb19fc18aba3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06706394s
STEP: Saw pod success
Mar 30 22:45:46.279: INFO: Pod "client-envvars-4afc9e67-8bc7-4f29-9c76-eb19fc18aba3" satisfied condition "Succeeded or Failed"
Mar 30 22:45:46.309: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod client-envvars-4afc9e67-8bc7-4f29-9c76-eb19fc18aba3 container env3cont: <nil>
STEP: delete the pod
Mar 30 22:45:46.381: INFO: Waiting for pod client-envvars-4afc9e67-8bc7-4f29-9c76-eb19fc18aba3 to disappear
Mar 30 22:45:46.412: INFO: Pod client-envvars-4afc9e67-8bc7-4f29-9c76-eb19fc18aba3 no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 22:45:46.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3163" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":283,"completed":267,"skipped":4456,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-5301/configmap-test-3a9464f6-5dc8-4568-b896-5605c5a0fc05
STEP: Creating a pod to test consume configMaps
Mar 30 22:45:46.690: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7abc3a9-dbbf-464d-8c03-15677e99a520" in namespace "configmap-5301" to be "Succeeded or Failed"
Mar 30 22:45:46.722: INFO: Pod "pod-configmaps-d7abc3a9-dbbf-464d-8c03-15677e99a520": Phase="Pending", Reason="", readiness=false. Elapsed: 32.007286ms
Mar 30 22:45:48.752: INFO: Pod "pod-configmaps-d7abc3a9-dbbf-464d-8c03-15677e99a520": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062568581s
STEP: Saw pod success
Mar 30 22:45:48.752: INFO: Pod "pod-configmaps-d7abc3a9-dbbf-464d-8c03-15677e99a520" satisfied condition "Succeeded or Failed"
Mar 30 22:45:48.782: INFO: Trying to get logs from node test1-md-0-nfkzj.c.kubernetes-es-logging.internal pod pod-configmaps-d7abc3a9-dbbf-464d-8c03-15677e99a520 container env-test: <nil>
STEP: delete the pod
Mar 30 22:45:48.863: INFO: Waiting for pod pod-configmaps-d7abc3a9-dbbf-464d-8c03-15677e99a520 to disappear
Mar 30 22:45:48.900: INFO: Pod pod-configmaps-d7abc3a9-dbbf-464d-8c03-15677e99a520 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 22:45:48.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5301" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":283,"completed":268,"skipped":4466,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 34 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 30 22:45:57.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6999" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":283,"completed":269,"skipped":4491,"failed":0}
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 30 22:45:58.008: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 30 22:45:58.168: INFO: Waiting up to 5m0s for pod "downward-api-6b944d74-9910-499b-8cc2-00c7ec8b17e5" in namespace "downward-api-4527" to be "Succeeded or Failed"
Mar 30 22:45:58.203: INFO: Pod "downward-api-6b944d74-9910-499b-8cc2-00c7ec8b17e5": Phase="Pending", Reason="", readiness=false. Elapsed: 35.08416ms
Mar 30 22:46:00.233: INFO: Pod "downward-api-6b944d74-9910-499b-8cc2-00c7ec8b17e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064709094s
STEP: Saw pod success
Mar 30 22:46:00.233: INFO: Pod "downward-api-6b944d74-9910-499b-8cc2-00c7ec8b17e5" satisfied condition "Succeeded or Failed"
Mar 30 22:46:00.263: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod downward-api-6b944d74-9910-499b-8cc2-00c7ec8b17e5 container dapi-container: <nil>
STEP: delete the pod
Mar 30 22:46:00.357: INFO: Waiting for pod downward-api-6b944d74-9910-499b-8cc2-00c7ec8b17e5 to disappear
Mar 30 22:46:00.389: INFO: Pod downward-api-6b944d74-9910-499b-8cc2-00c7ec8b17e5 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 30 22:46:00.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4527" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":283,"completed":270,"skipped":4492,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 35 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0330 22:46:11.135679   26158 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 30 22:46:11.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6831" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":283,"completed":271,"skipped":4502,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 30 22:46:11.323: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 22:46:12.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1837" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":283,"completed":272,"skipped":4514,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 162 lines ...
Mar 30 22:46:14.800: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Mar 30 22:46:14.800: INFO: Waiting for all frontend pods to be Running.
Mar 30 22:46:19.851: INFO: Waiting for frontend to serve content.
Mar 30 22:46:19.889: INFO: Trying to add a new entry to the guestbook.
Mar 30 22:46:19.927: INFO: Verifying that added entry can be retrieved.
Mar 30 22:46:19.964: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Mar 30 22:46:25.001: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig delete --grace-period=0 --force -f - --namespace=kubectl-5917'
Mar 30 22:46:25.229: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 30 22:46:25.229: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 30 22:46:25.229: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig delete --grace-period=0 --force -f - --namespace=kubectl-5917'
... skipping 16 lines ...
Mar 30 22:46:26.256: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 30 22:46:26.256: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 22:46:26.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5917" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":283,"completed":273,"skipped":4529,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-z55h
STEP: Creating a pod to test atomic-volume-subpath
Mar 30 22:46:26.584: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-z55h" in namespace "subpath-3213" to be "Succeeded or Failed"
Mar 30 22:46:26.619: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Pending", Reason="", readiness=false. Elapsed: 35.864768ms
Mar 30 22:46:28.650: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Running", Reason="", readiness=true. Elapsed: 2.06626969s
Mar 30 22:46:30.679: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Running", Reason="", readiness=true. Elapsed: 4.095777343s
Mar 30 22:46:32.709: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Running", Reason="", readiness=true. Elapsed: 6.12580091s
Mar 30 22:46:34.740: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Running", Reason="", readiness=true. Elapsed: 8.156149176s
Mar 30 22:46:36.771: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Running", Reason="", readiness=true. Elapsed: 10.187542671s
Mar 30 22:46:38.801: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Running", Reason="", readiness=true. Elapsed: 12.217893258s
Mar 30 22:46:40.832: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Running", Reason="", readiness=true. Elapsed: 14.247959804s
Mar 30 22:46:42.862: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Running", Reason="", readiness=true. Elapsed: 16.27875329s
Mar 30 22:46:44.893: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Running", Reason="", readiness=true. Elapsed: 18.309494207s
Mar 30 22:46:46.924: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Running", Reason="", readiness=true. Elapsed: 20.340161093s
Mar 30 22:46:48.959: INFO: Pod "pod-subpath-test-configmap-z55h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.374950411s
STEP: Saw pod success
Mar 30 22:46:48.959: INFO: Pod "pod-subpath-test-configmap-z55h" satisfied condition "Succeeded or Failed"
Mar 30 22:46:48.990: INFO: Trying to get logs from node test1-md-0-m7pwl.c.kubernetes-es-logging.internal pod pod-subpath-test-configmap-z55h container test-container-subpath-configmap-z55h: <nil>
STEP: delete the pod
Mar 30 22:46:49.069: INFO: Waiting for pod pod-subpath-test-configmap-z55h to disappear
Mar 30 22:46:49.098: INFO: Pod pod-subpath-test-configmap-z55h no longer exists
STEP: Deleting pod pod-subpath-test-configmap-z55h
Mar 30 22:46:49.098: INFO: Deleting pod "pod-subpath-test-configmap-z55h" in namespace "subpath-3213"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 30 22:46:49.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3213" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":283,"completed":274,"skipped":4546,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 22:46:54.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3137" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":283,"completed":275,"skipped":4553,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 85 lines ...
Mar 30 22:47:58.760: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 30 22:47:58.760: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 30 22:47:58.760: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 30 22:47:58.760: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:47:59.152: INFO: rc: 1
Mar 30 22:47:59.152: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "47acfe5ccd16f2288a676fcb4524b2d890faab11d673566f787731c37891158d": cannot exec in a stopped state: unknown

error:
exit status 1
Mar 30 22:48:09.152: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:48:09.428: INFO: rc: 1
Mar 30 22:48:09.428: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Mar 30 22:48:19.428: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:48:19.622: INFO: rc: 1
Mar 30 22:48:19.622: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:48:29.622: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:48:29.815: INFO: rc: 1
Mar 30 22:48:29.815: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:48:39.815: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:48:40.018: INFO: rc: 1
Mar 30 22:48:40.018: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:48:50.018: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:48:50.213: INFO: rc: 1
Mar 30 22:48:50.213: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:49:00.213: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:49:00.418: INFO: rc: 1
Mar 30 22:49:00.418: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:49:10.418: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:49:10.607: INFO: rc: 1
Mar 30 22:49:10.607: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:49:20.607: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:49:20.801: INFO: rc: 1
Mar 30 22:49:20.801: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:49:30.801: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:49:30.995: INFO: rc: 1
Mar 30 22:49:30.995: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:49:40.996: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:49:41.190: INFO: rc: 1
Mar 30 22:49:41.190: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:49:51.190: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:49:51.392: INFO: rc: 1
Mar 30 22:49:51.392: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:50:01.392: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:50:01.584: INFO: rc: 1
Mar 30 22:50:01.584: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:50:11.585: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:50:11.779: INFO: rc: 1
Mar 30 22:50:11.779: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:50:21.780: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:50:21.973: INFO: rc: 1
Mar 30 22:50:21.973: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:50:31.973: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:50:32.166: INFO: rc: 1
Mar 30 22:50:32.166: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:50:42.167: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:50:42.406: INFO: rc: 1
Mar 30 22:50:42.406: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:50:52.407: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:50:52.604: INFO: rc: 1
Mar 30 22:50:52.604: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:51:02.604: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:51:02.811: INFO: rc: 1
Mar 30 22:51:02.812: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:51:12.812: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:51:13.012: INFO: rc: 1
Mar 30 22:51:13.012: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:51:23.012: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:51:23.211: INFO: rc: 1
Mar 30 22:51:23.211: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:51:33.212: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:51:33.407: INFO: rc: 1
Mar 30 22:51:33.407: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:51:43.407: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:51:43.602: INFO: rc: 1
Mar 30 22:51:43.602: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:51:53.602: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:51:53.802: INFO: rc: 1
Mar 30 22:51:53.802: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:52:03.802: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:52:03.999: INFO: rc: 1
Mar 30 22:52:03.999: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:52:13.999: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:52:14.189: INFO: rc: 1
Mar 30 22:52:14.189: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:52:24.189: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:52:24.385: INFO: rc: 1
Mar 30 22:52:24.385: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-03-30T22:52:33Z"}
Mar 30 22:52:34.385: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:52:34.585: INFO: rc: 1
Mar 30 22:52:34.585: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 30 22:52:44.585: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 22:52:44.783: INFO: rc: 1
Mar 30 22:52:44.783: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.241.26.221:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-03-30T22:52:48Z"}