PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-09-20 04:03
Elapsed: 28m51s
Revision: b4cf642803a460ff73fae1bd3d6d1287e16beef6
Refs: 82703

No Test Failures!


Error lines from build-log.txt

... skipping 142 lines ...
INFO: 5212 processes: 5133 remote cache hit, 29 processwrapper-sandbox, 50 remote.
INFO: Build completed successfully, 5305 total actions
INFO: Build completed successfully, 5305 total actions
make: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
2019/09/20 04:10:15 process.go:155: Step 'make -C /home/prow/go/src/k8s.io/kubernetes bazel-release' finished in 6m50.770720285s
2019/09/20 04:10:15 util.go:255: Flushing memory.
2019/09/20 04:10:16 util.go:265: flushMem error (page cache): exit status 1
2019/09/20 04:10:16 process.go:153: Running: /home/prow/go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce --allow-dup
push-build.sh: BEGIN main on 43ca315b-db5b-11e9-8563-d28bdca8a776 Fri Sep 20 04:10:16 UTC 2019

$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
INFO: Invocation ID: 1dd84c2a-7828-46ec-a43b-c70b56c3e5b3
Loading: 
... skipping 846 lines ...
Trying to find master named 'e2e-2376e96bca-abe28-master'
Looking for address 'e2e-2376e96bca-abe28-master-ip'
Using master: e2e-2376e96bca-abe28-master (external IP: 35.185.226.224; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

...........Kubernetes cluster created.
Cluster "k8s-boskos-gce-project-13_e2e-2376e96bca-abe28" set.
User "k8s-boskos-gce-project-13_e2e-2376e96bca-abe28" set.
Context "k8s-boskos-gce-project-13_e2e-2376e96bca-abe28" created.
Switched to context "k8s-boskos-gce-project-13_e2e-2376e96bca-abe28".
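
The dotted wait above is a plain readiness poll: kube-up keeps probing the apiserver until it answers or the 300-second budget expires. A minimal Go sketch of that loop, using the master's external IP from the log and skipping TLS verification only because the sketch has no cluster CA at hand (both simplifications, not the real kube-up code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(300 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://35.185.226.224/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("Kubernetes cluster created.")
				return
			}
		}
		fmt.Print(".") // one dot per failed probe, as in the log above
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for cluster initialization")
}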
... skipping 3782 lines ...
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-6095 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Sep 20 04:22:09.656: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/; test "$?" -ne "0"'
Sep 20 04:22:10.490: INFO: rc: 1
Sep 20 04:22:10.490: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/; test "$?" -ne "0"] []  <nil> NOW: 2019-09-20 04:22:10.377892036 +0000 UTC m=+8.564405919 + curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1
 [] <nil> 0xc0024efe90 exit status 1 <nil> <nil> true [0xc00223c988 0xc00223c9a0 0xc00223c9b8] [0xc00223c988 0xc00223c9a0 0xc00223c9b8] [0xc00223c998 0xc00223c9b0] [0x10efcb0 0x10efcb0] 0xc0013daf00 <nil>}:
Command stdout:
NOW: 2019-09-20 04:22:10.377892036 +0000 UTC m=+8.564405919
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Sep 20 04:22:12.491: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/; test "$?" -ne "0"'
Sep 20 04:22:13.500: INFO: rc: 1
Sep 20 04:22:13.500: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/; test "$?" -ne "0"] []  <nil> NOW: 2019-09-20 04:22:13.301247388 +0000 UTC m=+11.487761269 + curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1
 [] <nil> 0xc002458930 exit status 1 <nil> <nil> true [0xc001087230 0xc001087248 0xc001087260] [0xc001087230 0xc001087248 0xc001087260] [0xc001087240 0xc001087258] [0x10efcb0 0x10efcb0] 0xc001b7c3c0 <nil>}:
Command stdout:
NOW: 2019-09-20 04:22:13.301247388 +0000 UTC m=+11.487761269
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Sep 20 04:22:14.491: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/; test "$?" -ne "0"'
Sep 20 04:22:16.713: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Sep 20 04:22:16.713: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Sep 20 04:22:16.800: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/'
Sep 20 04:22:18.622: INFO: rc: 7
Sep 20 04:22:18.623: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/] []  <nil>  + curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/
command terminated with exit code 7
 [] <nil> 0xc0024b2840 exit status 7 <nil> <nil> true [0xc00223c9e8 0xc00223ca00 0xc00223ca18] [0xc00223c9e8 0xc00223ca00 0xc00223ca18] [0xc00223c9f8 0xc00223ca10] [0x10efcb0 0x10efcb0] 0xc001e507e0 <nil>}:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Sep 20 04:22:20.623: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/'
Sep 20 04:22:22.614: INFO: rc: 7
Sep 20 04:22:22.614: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/] []  <nil>  + curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/
command terminated with exit code 7
 [] <nil> 0xc0023095f0 exit status 7 <nil> <nil> true [0xc002333060 0xc002333078 0xc002333090] [0xc002333060 0xc002333078 0xc002333090] [0xc002333070 0xc002333088] [0x10efcb0 0x10efcb0] 0xc0018b9200 <nil>}:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Sep 20 04:22:22.623: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/'
Sep 20 04:22:24.664: INFO: rc: 7
Sep 20 04:22:24.664: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/] []  <nil>  + curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/
command terminated with exit code 7
 [] <nil> 0xc002309d10 exit status 7 <nil> <nil> true [0xc002333098 0xc0023330b0 0xc0023330c8] [0xc002333098 0xc0023330b0 0xc0023330c8] [0xc0023330a8 0xc0023330c0] [0x10efcb0 0x10efcb0] 0xc001f6c000 <nil>}:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Sep 20 04:22:26.623: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-6095 execpod-p9scf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/'
Sep 20 04:22:27.813: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/\n"
Sep 20 04:22:27.813: INFO: stdout: "NOW: 2019-09-20 04:22:27.722143156 +0000 UTC m=+25.908657040"
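
Both phases above drive the same pattern: kubectl exec into a helper pod, run curl against the service DNS name, and poll until the exit code flips. While the endpoint must be unreachable, the appended test "$?" -ne "0" makes success read as rc 1; while waiting for it to come back, curl's rc 7 ("failed to connect") repeats until the pod serves again. A rough Go sketch of one such poll, assuming kubectl is on PATH and the kubeconfig path from the log (not the framework's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until curl against the service succeeds (the "tolerate unready again" phase).
	shellCmd := "curl -q -s --connect-timeout 2 http://tolerate-unready.services-6095.svc.cluster.local:80/"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl",
			"--kubeconfig=/workspace/.kube/config",
			"exec", "--namespace=services-6095", "execpod-p9scf",
			"--", "/bin/sh", "-x", "-c", shellCmd).CombinedOutput()
		if err == nil {
			fmt.Printf("service reachable, stdout: %q\n", out)
			return
		}
		// Non-nil err here means curl exited nonzero (e.g. code 7); keep polling.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for service to become reachable")
}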
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-6095
... skipping 1150 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:22:50.266: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1544
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating configMap that has name configmap-test-emptyKey-0e8d3fd0-845d-4f85-be30-0113353e0fb2
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:152
Sep 20 04:22:50.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1544" for this suite.
Sep 20 04:22:56.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 04:22:58.209: INFO: namespace configmap-1544 deletion completed in 7.490098643s
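
For reference, the object rejected above is simply a ConfigMap whose data map uses "" as a key; API validation requires non-empty keys. A minimal sketch with a recent client-go (the clientset construction is an assumption; the validation behavior is the API's):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"}, // empty key: invalid
	}
	_, err = cs.CoreV1().ConfigMaps("configmap-1544").Create(context.TODO(), cm, metav1.CreateOptions{})
	fmt.Println(err) // expect an Invalid error: a key must be non-empty
}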


• [SLOW TEST:7.942 seconds]
[sig-node] ConfigMap
test/e2e/common/configmap.go:32
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  test/e2e/framework/framework.go:151
... skipping 142 lines ...
STEP: Deleting the previously created pod
Sep 20 04:22:35.355: INFO: Deleting pod "pvc-volume-tester-gd5zw" in namespace "csi-mock-volumes-7566"
Sep 20 04:22:35.432: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gd5zw" to be fully deleted
STEP: Checking CSI driver logs
Sep 20 04:22:45.567: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7566","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7566","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7566","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-46abd481-23e0-42c2-a977-6c88618ed389","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-46abd481-23e0-42c2-a977-6c88618ed389"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7566","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-7566","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-46abd481-23e0-42c2-a977-6c88618ed389","storage.kubernetes.io/csiProvisionerIdentity":"1568953336433-8081-csi-mock-csi-mock-volumes-7566"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46abd481-23e0-42c2-a977-6c88618ed389/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-46abd481-23e0-42c2-a977-6c88618ed389","storage.kubernetes.io/csiProvisionerIdentity":"1568953336433-8081-csi-mock-csi-mock-volumes-7566"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46abd481-23e0-42c2-a977-6c88618ed389/globalmount","target_path":"/var/lib/kubelet/pods/7ce863c9-7ba3-48b9-a19b-7acb02d6ba63/volumes/kubernetes.io~csi/pvc-46abd481-23e0-42c2-a977-6c88618ed389/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-46abd481-23e0-42c2-a977-6c88618ed389","storage.kubernetes.io/csiProvisionerIdentity":"1568953336433-8081-csi-mock-csi-mock-volumes-7566"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7ce863c9-7ba3-48b9-a19b-7acb02d6ba63/volumes/kubernetes.io~csi/pvc-46abd481-23e0-42c2-a977-6c88618ed389/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46abd481-23e0-42c2-a977-6c88618ed389/globalmount"},"Response":{},"Error":""}

Sep 20 04:22:45.568: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
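
The "Found NodeUnpublishVolume" line shows what the test does with the driver log above: every gRPCCall: line is JSON after a fixed prefix, so the RPC sequence (CreateVolume → ControllerPublishVolume → NodeStageVolume → NodePublishVolume → NodeUnpublishVolume → NodeUnstageVolume) can be recovered with a simple scan. A hedged stand-in for that parse, not the framework's actual code:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type grpcCall struct {
	Method string `json:"Method"`
	Error  string `json:"Error"`
}

func main() {
	// Feed the mock driver log on stdin; print the RPC sequence.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // gRPCCall lines can be long
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, "gRPCCall: ") {
			continue
		}
		var c grpcCall
		if err := json.Unmarshal([]byte(strings.TrimPrefix(line, "gRPCCall: ")), &c); err != nil {
			continue // tolerate truncated lines
		}
		fmt.Printf("%s error=%q\n", c.Method, c.Error)
	}
}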
STEP: Deleting pod pvc-volume-tester-gd5zw
Sep 20 04:22:45.568: INFO: Deleting pod "pvc-volume-tester-gd5zw" in namespace "csi-mock-volumes-7566"
STEP: Deleting claim pvc-4qmcv
Sep 20 04:22:45.678: INFO: Waiting up to 2m0s for PersistentVolume pvc-46abd481-23e0-42c2-a977-6c88618ed389 to get deleted
... skipping 2785 lines ...
Sep 20 04:22:56.606: INFO: Waiting for PV gce-xhks7 to bind to PVC pvc-lnd98
Sep 20 04:22:56.606: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-lnd98] to have phase Bound
Sep 20 04:22:56.651: INFO: PersistentVolumeClaim pvc-lnd98 found and phase=Bound (44.82341ms)
Sep 20 04:22:56.651: INFO: Waiting up to 3m0s for PersistentVolume gce-xhks7 to have phase Bound
Sep 20 04:22:56.690: INFO: PersistentVolume gce-xhks7 found and phase=Bound (39.174074ms)
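
The "Waiting up to 3m0s ... to have phase Bound" lines are a phase poll on the claim. A sketch of the same loop with a recent client-go (helper name, intervals, and clientset construction are assumptions; the printed message mirrors the log):

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls until the named PVC reports phase Bound.
func waitForPVCBound(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase == v1.ClaimBound {
			return true, nil
		}
		fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
		return false, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForPVCBound(cs, "pv-6982", "pvc-lnd98"); err != nil {
		panic(err)
	}
}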
STEP: Creating the Client Pod
[It] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:124
STEP: Deleting the Claim
Sep 20 04:23:21.111: INFO: Deleting PersistentVolumeClaim "pvc-lnd98"
STEP: Deleting the Pod
Sep 20 04:23:21.614: INFO: Deleting pod "pvc-tester-k7vll" in namespace "pv-6982"
Sep 20 04:23:21.895: INFO: Wait up to 5m0s for pod "pvc-tester-k7vll" to be fully deleted
... skipping 16 lines ...
Sep 20 04:23:50.926: INFO: Successfully deleted PD "e2e-2376e96bca-abe28-6c204f64-bd01-45ee-8418-d271bc4a27ad".


• [SLOW TEST:57.404 seconds]
[sig-storage] PersistentVolumes GCEPD
test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:124
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:93
... skipping 353 lines ...
Sep 20 04:22:47.800: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-jtglt] to have phase Bound
Sep 20 04:22:47.838: INFO: PersistentVolumeClaim pvc-jtglt found but phase is Pending instead of Bound.
Sep 20 04:22:49.873: INFO: PersistentVolumeClaim pvc-jtglt found and phase=Bound (2.07251924s)
Sep 20 04:22:49.873: INFO: Waiting up to 3m0s for PersistentVolume gce-mvhkf to have phase Bound
Sep 20 04:22:49.907: INFO: PersistentVolume gce-mvhkf found and phase=Bound (34.161123ms)
STEP: Creating the Client Pod
[It] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:139
STEP: Deleting the Persistent Volume
Sep 20 04:23:20.183: INFO: Deleting PersistentVolume "gce-mvhkf"
STEP: Deleting the client pod
Sep 20 04:23:20.430: INFO: Deleting pod "pvc-tester-jnx42" in namespace "pv-8649"
Sep 20 04:23:20.480: INFO: Wait up to 5m0s for pod "pvc-tester-jnx42" to be fully deleted
... skipping 16 lines ...
Sep 20 04:23:55.509: INFO: Successfully deleted PD "e2e-2376e96bca-abe28-98bf9f12-38fe-4771-8aac-dea26984dc53".


• [SLOW TEST:71.152 seconds]
[sig-storage] PersistentVolumes GCEPD
test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:139
------------------------------
[BeforeEach] [sig-storage] Dynamic Provisioning
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:21:51.501: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 556 lines ...
Sep 20 04:23:30.779: INFO: ssh prow@34.82.128.63:22: command:   sudo mkdir "/var/lib/kubelet/mount-propagation-8348"/host; sudo mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-8348"/host; echo host > "/var/lib/kubelet/mount-propagation-8348"/host/file
Sep 20 04:23:30.779: INFO: ssh prow@34.82.128.63:22: stdout:    ""
Sep 20 04:23:30.779: INFO: ssh prow@34.82.128.63:22: stderr:    ""
Sep 20 04:23:30.779: INFO: ssh prow@34.82.128.63:22: exit code: 0
Sep 20 04:23:30.816: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8348 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:30.816: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:31.266: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:31.303: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8348 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:31.303: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:31.726: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:31.769: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8348 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:31.769: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:32.175: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:32.213: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8348 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:32.213: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:32.767: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Sep 20 04:23:32.915: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8348 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:32.916: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:33.693: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:33.814: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8348 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:33.814: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:34.274: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Sep 20 04:23:34.326: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8348 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:34.326: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:34.799: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:34.837: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8348 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:34.837: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:35.312: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:35.357: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8348 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:35.357: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:35.931: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:35.970: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8348 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:35.970: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:36.623: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Sep 20 04:23:36.660: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8348 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:36.660: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:37.148: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Sep 20 04:23:37.188: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8348 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:37.188: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:37.654: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Sep 20 04:23:37.693: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8348 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:37.693: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:38.120: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:38.158: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8348 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:38.158: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:38.647: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:38.687: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8348 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:38.688: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:39.133: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Sep 20 04:23:39.173: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8348 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:39.173: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:39.673: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:39.717: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8348 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:39.717: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:40.151: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:40.192: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8348 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:40.192: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:40.789: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Sep 20 04:23:40.842: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8348 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:40.842: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:42.564: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:42.601: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8348 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:23:42.601: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:23:43.338: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Sep 20 04:23:43.338: INFO: Getting external IP address for e2e-2376e96bca-abe28-minion-group-990f
Sep 20 04:23:43.338: INFO: SSH "test `cat \"/var/lib/kubelet/mount-propagation-8348\"/master/file` = master" on e2e-2376e96bca-abe28-minion-group-990f(34.82.128.63:22)
Sep 20 04:23:43.787: INFO: ssh prow@34.82.128.63:22: command:   test `cat "/var/lib/kubelet/mount-propagation-8348"/master/file` = master
Sep 20 04:23:43.788: INFO: ssh prow@34.82.128.63:22: stdout:    ""
Sep 20 04:23:43.788: INFO: ssh prow@34.82.128.63:22: stderr:    ""
Sep 20 04:23:43.788: INFO: ssh prow@34.82.128.63:22: exit code: 0
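
Collapsing the exec matrix above: each of the five mounts is written from one place, then every pod tries to read every file. Reading off the stdout/exit-code pairs (Y = file visible, - = "No such file or directory"):

             master  slave  private  default  host
  master       Y       -       -        -      Y
  slave        Y       Y       -        -      Y
  private      -       -       Y        -      -
  default      -       -       -        Y      -

This is consistent with the master pod mounting Bidirectional, slave HostToContainer, and private/default None (inferred from the pod names; the log itself does not print the propagation modes).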
... skipping 2416 lines ...
Sep 20 04:23:55.320: INFO: PersistentVolumeClaim csi-hostpathb7p6w found but phase is Pending instead of Bound.
Sep 20 04:23:57.358: INFO: PersistentVolumeClaim csi-hostpathb7p6w found but phase is Pending instead of Bound.
Sep 20 04:23:59.419: INFO: PersistentVolumeClaim csi-hostpathb7p6w found but phase is Pending instead of Bound.
Sep 20 04:24:01.458: INFO: PersistentVolumeClaim csi-hostpathb7p6w found and phase=Bound (8.220085681s)
STEP: Expanding non-expandable pvc
Sep 20 04:24:01.551: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Sep 20 04:24:01.638: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:03.719: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:05.728: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:07.724: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:09.721: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:11.716: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:13.715: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:15.714: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:17.717: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:19.715: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:21.716: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:23.719: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:25.749: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:27.713: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:29.716: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:31.715: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:24:31.790: INFO: Error updating pvc csi-hostpathb7p6w with persistentvolumeclaims "csi-hostpathb7p6w" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
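
The sizes in the currentPvcSize/newSize line are resource.Quantity values: 5368709120 bytes is 5Gi and 6442450944 is 6Gi, i.e. the test requests current + 1Gi and then retries the update for about 30 seconds, expecting the forbidden-resize error above on every attempt. The size arithmetic, as a small sketch:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	current := resource.MustParse("5Gi") // 5368709120 bytes
	newSize := current.DeepCopy()
	newSize.Add(resource.MustParse("1Gi"))
	fmt.Printf("currentPvcSize %s (%d), newSize %s (%d)\n",
		current.String(), current.Value(), newSize.String(), newSize.Value())
	// prints: currentPvcSize 5Gi (5368709120), newSize 6Gi (6442450944)
}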
STEP: Deleting pvc
Sep 20 04:24:31.790: INFO: Deleting PersistentVolumeClaim "csi-hostpathb7p6w"
Sep 20 04:24:31.831: INFO: Waiting up to 5m0s for PersistentVolume pvc-5c683dc7-b5ba-4925-addd-b5aa67971e7c to get deleted
Sep 20 04:24:31.874: INFO: PersistentVolume pvc-5c683dc7-b5ba-4925-addd-b5aa67971e7c found and phase=Released (42.936843ms)
Sep 20 04:24:36.938: INFO: PersistentVolume pvc-5c683dc7-b5ba-4925-addd-b5aa67971e7c was removed
STEP: Deleting sc
... skipping 792 lines ...
Sep 20 04:24:30.461: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Sep 20 04:24:31.098: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass volume-expand-3425-gcepd-sctp292
STEP: creating a claim
STEP: Expanding non-expandable pvc
Sep 20 04:24:31.231: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Sep 20 04:24:31.312: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:33.394: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:35.401: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:37.416: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:39.397: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:41.398: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:43.405: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:45.421: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:47.398: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:49.394: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:51.394: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:53.530: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:55.481: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:57.408: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:24:59.399: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:25:01.398: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:25:01.490: INFO: Error updating pvc gcepdrzk8f with PersistentVolumeClaim "gcepdrzk8f" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
STEP: Deleting pvc
Sep 20 04:25:01.490: INFO: Deleting PersistentVolumeClaim "gcepdrzk8f"
STEP: Deleting sc
Sep 20 04:25:01.583: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/framework/framework.go:152
... skipping 749 lines ...
Sep 20 04:24:54.325: INFO: Pod exec-volume-test-gcepd-preprovisionedpv-vv6x no longer exists
STEP: Deleting pod exec-volume-test-gcepd-preprovisionedpv-vv6x
Sep 20 04:24:54.325: INFO: Deleting pod "exec-volume-test-gcepd-preprovisionedpv-vv6x" in namespace "volume-8429"
STEP: Deleting pv and pvc
Sep 20 04:24:54.365: INFO: Deleting PersistentVolumeClaim "pvc-grcs9"
Sep 20 04:24:54.426: INFO: Deleting PersistentVolume "gcepd-7db94"
Sep 20 04:24:55.526: INFO: error deleting PD "e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-990f', resourceInUseByAnotherResource
Sep 20 04:24:55.526: INFO: Couldn't delete PD "e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-990f', resourceInUseByAnotherResource
Sep 20 04:25:01.472: INFO: error deleting PD "e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-990f', resourceInUseByAnotherResource
Sep 20 04:25:01.472: INFO: Couldn't delete PD "e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-990f', resourceInUseByAnotherResource
Sep 20 04:25:07.949: INFO: error deleting PD "e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-990f', resourceInUseByAnotherResource
Sep 20 04:25:07.949: INFO: Couldn't delete PD "e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-990f', resourceInUseByAnotherResource
Sep 20 04:25:15.479: INFO: Successfully deleted PD "e2e-2376e96bca-abe28-fdd8e016-7b20-4f27-a505-fba23e049450".
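
The delete/sleep cycle above is the standard handling of GCE's resourceInUseByAnotherResource: the disk cannot be deleted while a node still has it attached, so the cleanup retries with a 5-second backoff until the 400 stops. The shape of that loop (deletePD here is a hypothetical stand-in for the real cloud API call):

package main

import (
	"errors"
	"fmt"
	"time"
)

// deletePD is a hypothetical stand-in for the GCE disk-delete call;
// it keeps failing while the disk is still attached to a node.
func deletePD(name string, attempt int) error {
	if attempt < 2 {
		return errors.New("googleapi: Error 400: ... resourceInUseByAnotherResource")
	}
	return nil
}

func main() {
	const name = "e2e-pd-example" // hypothetical disk name
	for attempt := 0; attempt < 10; attempt++ {
		err := deletePD(name, attempt)
		if err == nil {
			fmt.Printf("Successfully deleted PD %q.\n", name)
			return
		}
		fmt.Printf("Couldn't delete PD %q, sleeping 5s: %v\n", name, err)
		time.Sleep(5 * time.Second)
	}
}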
Sep 20 04:25:15.479: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:152
Sep 20 04:25:15.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8429" for this suite.
... skipping 324 lines ...
STEP: cleaning the environment after gcepd
Sep 20 04:24:50.479: INFO: Deleting pod "gcepd-client" in namespace "volume-8384"
Sep 20 04:24:50.520: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Sep 20 04:25:02.599: INFO: Deleting PersistentVolumeClaim "pvc-tcbvt"
Sep 20 04:25:02.639: INFO: Deleting PersistentVolume "gcepd-wd7cm"
Sep 20 04:25:04.424: INFO: error deleting PD "e2e-2376e96bca-abe28-60391314-9767-424d-b0c7-3f6e5a7e4edb": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-60391314-9767-424d-b0c7-3f6e5a7e4edb' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q21h', resourceInUseByAnotherResource
Sep 20 04:25:04.424: INFO: Couldn't delete PD "e2e-2376e96bca-abe28-60391314-9767-424d-b0c7-3f6e5a7e4edb", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-60391314-9767-424d-b0c7-3f6e5a7e4edb' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q21h', resourceInUseByAnotherResource
Sep 20 04:25:10.889: INFO: error deleting PD "e2e-2376e96bca-abe28-60391314-9767-424d-b0c7-3f6e5a7e4edb": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-60391314-9767-424d-b0c7-3f6e5a7e4edb' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q21h', resourceInUseByAnotherResource
Sep 20 04:25:10.889: INFO: Couldn't delete PD "e2e-2376e96bca-abe28-60391314-9767-424d-b0c7-3f6e5a7e4edb", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-60391314-9767-424d-b0c7-3f6e5a7e4edb' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q21h', resourceInUseByAnotherResource
Sep 20 04:25:18.519: INFO: Successfully deleted PD "e2e-2376e96bca-abe28-60391314-9767-424d-b0c7-3f6e5a7e4edb".
Sep 20 04:25:18.519: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:152
Sep 20 04:25:18.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8384" for this suite.
... skipping 308 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
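
The pod created for this check sets the sysctl through the pod-level security context; unsafe sysctls additionally have to be whitelisted on the kubelet (--allowed-unsafe-sysctls), which is what this test exercises. Sketched with client-go types (pod name, image, and command are placeholders, not the test's actual values):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-shm-rmid-forced"}, // placeholder name
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{
				// Applied to the pod's namespaces at container start.
				Sysctls: []v1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder image
				Command: []string{"/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"},
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}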
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:152
... skipping 491 lines ...
Sep 20 04:25:23.578: INFO: Trying to get logs from node e2e-2376e96bca-abe28-minion-group-q94q pod exec-volume-test-gcepd-qqxn container exec-container-gcepd-qqxn: <nil>
STEP: delete the pod
Sep 20 04:25:23.701: INFO: Waiting for pod exec-volume-test-gcepd-qqxn to disappear
Sep 20 04:25:23.738: INFO: Pod exec-volume-test-gcepd-qqxn no longer exists
STEP: Deleting pod exec-volume-test-gcepd-qqxn
Sep 20 04:25:23.738: INFO: Deleting pod "exec-volume-test-gcepd-qqxn" in namespace "volume-7304"
Sep 20 04:25:24.730: INFO: error deleting PD "e2e-2376e96bca-abe28-60d7db9f-4f2f-4744-af2b-772bbdffd74a": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-60d7db9f-4f2f-4744-af2b-772bbdffd74a' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q94q', resourceInUseByAnotherResource
Sep 20 04:25:24.730: INFO: Couldn't delete PD "e2e-2376e96bca-abe28-60d7db9f-4f2f-4744-af2b-772bbdffd74a", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-60d7db9f-4f2f-4744-af2b-772bbdffd74a' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q94q', resourceInUseByAnotherResource
Sep 20 04:25:31.456: INFO: error deleting PD "e2e-2376e96bca-abe28-60d7db9f-4f2f-4744-af2b-772bbdffd74a": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-60d7db9f-4f2f-4744-af2b-772bbdffd74a' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q94q', resourceInUseByAnotherResource
Sep 20 04:25:31.456: INFO: Couldn't delete PD "e2e-2376e96bca-abe28-60d7db9f-4f2f-4744-af2b-772bbdffd74a", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-60d7db9f-4f2f-4744-af2b-772bbdffd74a' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q94q', resourceInUseByAnotherResource
Sep 20 04:25:38.828: INFO: Successfully deleted PD "e2e-2376e96bca-abe28-60d7db9f-4f2f-4744-af2b-772bbdffd74a".
Sep 20 04:25:38.829: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:152
Sep 20 04:25:38.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7304" for this suite.
... skipping 2247 lines ...
Sep 20 04:26:10.430: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 20 04:26:10.430: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config describe pod redis-master-m8hjh --namespace=kubectl-8994'
Sep 20 04:26:10.813: INFO: stderr: ""
Sep 20 04:26:10.813: INFO: stdout: "Name:         redis-master-m8hjh\nNamespace:    kubectl-8994\nPriority:     0\nNode:         e2e-2376e96bca-abe28-minion-group-q94q/10.40.0.5\nStart Time:   Fri, 20 Sep 2019 04:26:06 +0000\nLabels:       app=redis\n              role=master\nAnnotations:  kubernetes.io/psp: e2e-test-privileged-psp\nStatus:       Running\nIP:           10.64.0.99\nIPs:\n  IP:           10.64.0.99\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://e468ae924414fab1f818c243b3765b2001ede43a3df188c3ebdddf78b84d0928\n    Image:          docker.io/library/redis:5.0.5-alpine\n    Image ID:       docker-pullable://redis@sha256:a606eaca41c3c69c7d2c8a142ec445e71156bae8526ae7970f62b6399e57761c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 20 Sep 2019 04:26:09 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5d5f8 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-5d5f8:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-5d5f8\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                                             Message\n  ----    ------     ----       ----                                             -------\n  Normal  Scheduled  <unknown>  default-scheduler                                Successfully assigned kubectl-8994/redis-master-m8hjh to e2e-2376e96bca-abe28-minion-group-q94q\n  Normal  Pulled     2s         kubelet, e2e-2376e96bca-abe28-minion-group-q94q  Container image \"docker.io/library/redis:5.0.5-alpine\" already present on machine\n  Normal  Created    2s         kubelet, e2e-2376e96bca-abe28-minion-group-q94q  Created container redis-master\n  Normal  Started    1s         kubelet, e2e-2376e96bca-abe28-minion-group-q94q  Started container redis-master\n"
Sep 20 04:26:10.813: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config describe rc redis-master --namespace=kubectl-8994'
Sep 20 04:26:11.201: INFO: stderr: ""
Sep 20 04:26:11.201: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-8994\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        docker.io/library/redis:5.0.5-alpine\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: redis-master-m8hjh\n"
Sep 20 04:26:11.201: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config describe service redis-master --namespace=kubectl-8994'
Sep 20 04:26:11.586: INFO: stderr: ""
Sep 20 04:26:11.586: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-8994\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.0.35.177\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.64.0.99:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep 20 04:26:11.628: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config describe node e2e-2376e96bca-abe28-master'
Sep 20 04:26:12.131: INFO: stderr: ""
Sep 20 04:26:12.131: INFO: stdout: "Name:               e2e-2376e96bca-abe28-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-west1\n                    failure-domain.beta.kubernetes.io/zone=us-west1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=e2e-2376e96bca-abe28-master\n                    kubernetes.io/os=linux\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 20 Sep 2019 04:20:28 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Fri, 20 Sep 2019 04:20:58 +0000   Fri, 20 Sep 2019 04:20:58 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Fri, 20 Sep 2019 04:25:19 +0000   Fri, 20 Sep 2019 04:20:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 20 Sep 2019 04:25:19 +0000   Fri, 20 Sep 2019 04:20:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 20 Sep 2019 04:25:19 +0000   Fri, 20 Sep 2019 04:20:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 20 Sep 2019 04:25:19 +0000   Fri, 20 Sep 2019 04:20:28 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.40.0.2\n  ExternalIP:   35.185.226.224\n  InternalDNS:  e2e-2376e96bca-abe28-master.c.k8s-boskos-gce-project-13.internal\n  Hostname:     e2e-2376e96bca-abe28-master.c.k8s-boskos-gce-project-13.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          16293736Ki\n  hugepages-2Mi:              0\n  memory:                     3786208Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          15016307073\n  hugepages-2Mi:              0\n  memory:                     3530208Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 e9c81d59c2cace574732c5688c06261b\n  System UUID:                e9c81d59-c2ca-ce57-4732-c5688c06261b\n  Boot ID:                    f92ba6c5-e23e-4ac8-8f45-400cc97b9e10\n  Kernel Version:             4.19.60+\n  OS Image:                   Container-Optimized OS from Google\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://19.3.1\n  Kubelet Version:            v1.17.0-alpha.0.1598+e9e1a970bbc1c7\n  Kube-Proxy Version:         v1.17.0-alpha.0.1598+e9e1a970bbc1c7\nPodCIDR:                      10.64.2.0/24\nPodCIDRs:                     10.64.2.0/24\nProviderID:                   gce://k8s-boskos-gce-project-13/us-west1-b/e2e-2376e96bca-abe28-master\nNon-terminated Pods:          (10 in total)\n  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-empty-dir-cleanup-e2e-2376e96bca-abe28-master     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s\n  kube-system                 etcd-server-e2e-2376e96bca-abe28-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         4m55s\n  kube-system                 etcd-server-events-e2e-2376e96bca-abe28-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s\n  kube-system                 fluentd-gcp-v3.2.0-qdf8b                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s\n  kube-system                 kube-addon-manager-e2e-2376e96bca-abe28-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         4m50s\n  kube-system                 kube-apiserver-e2e-2376e96bca-abe28-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         5m27s\n  kube-system                 kube-controller-manager-e2e-2376e96bca-abe28-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         5m9s\n  kube-system                 kube-scheduler-e2e-2376e96bca-abe28-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         5m15s\n  kube-system                 l7-lb-controller-v1.2.3-e2e-2376e96bca-abe28-master    10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         4m37s\n  kube-system                 metadata-proxy-v0.1-zz7dq                              32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      5m44s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests    Limits\n  --------                   --------    ------\n  cpu                        872m (87%)  32m (3%)\n  memory                     145Mi (4%)  45Mi (1%)\n  ephemeral-storage          0 (0%)      0 (0%)\n  attachable-volumes-gce-pd  0           0\nEvents:                      <none>\n"
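Note: the master described above carries two value-less NoSchedule taints (node-role.kubernetes.io/master and node.kubernetes.io/unschedulable), which is why only tolerating system pods land there. A minimal sketch, assuming k8s.io/api is available, of the tolerations a pod would need to schedule onto such a node:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Tolerations matching the two value-less NoSchedule taints shown on the
	// master node above; Operator Exists matches a taint regardless of value.
	tolerations := []v1.Toleration{
		{Key: "node-role.kubernetes.io/master", Operator: v1.TolerationOpExists, Effect: v1.TaintEffectNoSchedule},
		{Key: "node.kubernetes.io/unschedulable", Operator: v1.TolerationOpExists, Effect: v1.TaintEffectNoSchedule},
	}
	fmt.Printf("%+v\n", tolerations)
}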
... skipping 506 lines ...
Sep 20 04:26:06.050: INFO: Pod exec-volume-test-gcepd-preprovisionedpv-tk5k no longer exists
STEP: Deleting pod exec-volume-test-gcepd-preprovisionedpv-tk5k
Sep 20 04:26:06.050: INFO: Deleting pod "exec-volume-test-gcepd-preprovisionedpv-tk5k" in namespace "volume-7699"
STEP: Deleting pv and pvc
Sep 20 04:26:06.087: INFO: Deleting PersistentVolumeClaim "pvc-cvxzd"
Sep 20 04:26:06.128: INFO: Deleting PersistentVolume "gcepd-zcq4c"
Sep 20 04:26:07.599: INFO: error deleting PD "e2e-2376e96bca-abe28-1434f69d-9b2f-4128-8f30-1422478f9dbb": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-1434f69d-9b2f-4128-8f30-1422478f9dbb' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q94q', resourceInUseByAnotherResource
Sep 20 04:26:07.599: INFO: Couldn't delete PD "e2e-2376e96bca-abe28-1434f69d-9b2f-4128-8f30-1422478f9dbb", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-1434f69d-9b2f-4128-8f30-1422478f9dbb' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q94q', resourceInUseByAnotherResource
Sep 20 04:26:14.157: INFO: error deleting PD "e2e-2376e96bca-abe28-1434f69d-9b2f-4128-8f30-1422478f9dbb": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-1434f69d-9b2f-4128-8f30-1422478f9dbb' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q94q', resourceInUseByAnotherResource
Sep 20 04:26:14.158: INFO: Couldn't delete PD "e2e-2376e96bca-abe28-1434f69d-9b2f-4128-8f30-1422478f9dbb", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-1434f69d-9b2f-4128-8f30-1422478f9dbb' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q94q', resourceInUseByAnotherResource
Sep 20 04:26:21.628: INFO: Successfully deleted PD "e2e-2376e96bca-abe28-1434f69d-9b2f-4128-8f30-1422478f9dbb".
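The two deletion failures above are the expected attach/detach race: the PD cannot be deleted until GCE finishes detaching it from the minion, so the framework retries with a 5s sleep until the delete succeeds. A minimal sketch of that retry shape, with a hypothetical deletePD standing in for the googleapi call:

package main

import (
	"errors"
	"fmt"
	"time"
)

// deletePD is a hypothetical stand-in for the googleapi disk-delete call,
// which returns resourceInUseByAnotherResource while the disk is attached.
func deletePD(name string) error {
	return errors.New("googleapi: Error 400: resourceInUseByAnotherResource")
}

// retryDelete mirrors the loop in the log: attempt, log, sleep 5s, give up
// after the deadline.
func retryDelete(name string, deadline time.Duration) error {
	var err error
	for start := time.Now(); time.Since(start) < deadline; time.Sleep(5 * time.Second) {
		if err = deletePD(name); err == nil {
			fmt.Printf("Successfully deleted PD %q.\n", name)
			return nil
		}
		fmt.Printf("Couldn't delete PD %q, sleeping 5s: %v\n", name, err)
	}
	return err
}

func main() { _ = retryDelete("example-pd", 20*time.Second) }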
Sep 20 04:26:21.628: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/framework/framework.go:152
Sep 20 04:26:21.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7699" for this suite.
... skipping 353 lines ...
Sep 20 04:26:23.186: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/framework/framework.go:152
Sep 20 04:26:23.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2255" for this suite.
Sep 20 04:26:29.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 04:26:29.520: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"stable.example.com", Version:"v2"}
Sep 20 04:26:29.520: INFO: Error discovering server preferred namespaced resources: unable to retrieve the complete list of server APIs: stable.example.com/v1: the server could not find the requested resource, stable.example.com/v2: the server could not find the requested resource, retrying in 2s.
Sep 20 04:26:32.871: INFO: namespace volume-2255 deletion completed in 9.644065519s
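The discovery errors during namespace teardown come from stale aggregated API groups (here the leftover stable.example.com CRD versions); the framework simply retries discovery until the list is complete. A sketch of the same loop, assuming client-go and the kubeconfig path used by this job:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Retry for up to 30s, as the teardown above does; a stale aggregated
	// group makes the first attempts fail.
	deadline := time.Now().Add(30 * time.Second)
	for {
		if _, err = dc.ServerPreferredNamespacedResources(); err == nil {
			fmt.Println("discovery complete")
			return
		}
		if time.Now().After(deadline) {
			panic(err)
		}
		fmt.Printf("discovery error, retrying in 2s: %v\n", err)
		time.Sleep(2 * time.Second)
	}
}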


• [SLOW TEST:58.893 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
... skipping 3171 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:27:13.504: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-485
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name secret-emptykey-test-0cafb56a-240a-4caf-bdac-1537d6579e59
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:152
Sep 20 04:27:13.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-485" for this suite.
Sep 20 04:27:20.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 04:27:23.197: INFO: namespace secrets-485 deletion completed in 9.258345836s


• [SLOW TEST:9.693 seconds]
[sig-api-machinery] Secrets
test/e2e/common/secrets.go:32
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:698
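For reference, the validation exercised by this test rejects any Secret whose data map contains an empty key. A minimal sketch, assuming a pre-1.18 client-go (where Create takes only the object, matching the v1.17 code under test):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	secret := &v1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
		// An empty map key fails validation: keys must be non-empty and
		// consist of alphanumerics, '-', '_' or '.'.
		Data: map[string][]byte{"": []byte("value-1")},
	}
	_, err = client.CoreV1().Secrets("default").Create(secret)
	fmt.Printf("expected a validation error: %v\n", err)
}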
------------------------------
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:151
... skipping 1394 lines ...
Sep 20 04:27:16.174: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass volume-expand-2154-gcepd-sch2p4v
STEP: creating a claim
Sep 20 04:27:16.214: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Sep 20 04:27:16.296: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Sep 20 04:27:16.373: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:18.455: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:20.765: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:22.460: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:24.541: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:26.597: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:28.455: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:30.449: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:32.592: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:34.471: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:36.455: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:38.558: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:40.674: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:42.449: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:44.449: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:46.454: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 04:27:46.535: INFO: Error updating pvc gcepdklskh with PersistentVolumeClaim "gcepdklskh" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
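Each rejection above is the apiserver's PVC validation: spec is immutable after creation except resources.requests, and growing requests.storage is only permitted when the claim's StorageClass sets allowVolumeExpansion. A sketch of the update the test keeps attempting, assuming a pre-1.18 client-go; the claim name is the generated one from the log and the namespace is hypothetical:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pvcs := client.CoreV1().PersistentVolumeClaims("default")
	pvc, err := pvcs.Get("gcepdklskh", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Bump requests.storage from 5Gi to 6Gi; without allowVolumeExpansion on
	// the StorageClass the apiserver rejects this as an immutable-spec change.
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse("6Gi")
	_, err = pvcs.Update(pvc)
	fmt.Printf("expected Forbidden: %v\n", err)
}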
STEP: Deleting pvc
Sep 20 04:27:46.535: INFO: Deleting PersistentVolumeClaim "gcepdklskh"
STEP: Deleting sc
Sep 20 04:27:46.618: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/framework/framework.go:152
... skipping 807 lines ...
STEP: cleaning the environment after gcepd
Sep 20 04:27:32.793: INFO: Deleting pod "gcepd-client" in namespace "volume-1627"
Sep 20 04:27:33.069: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Sep 20 04:27:45.297: INFO: Deleting PersistentVolumeClaim "pvc-l8pfr"
Sep 20 04:27:45.338: INFO: Deleting PersistentVolume "gcepd-bw6q4"
Sep 20 04:27:46.553: INFO: error deleting PD "e2e-2376e96bca-abe28-eead0089-eb96-47e8-93ef-21fe41da07bb": googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-eead0089-eb96-47e8-93ef-21fe41da07bb' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q94q', resourceInUseByAnotherResource
Sep 20 04:27:46.553: INFO: Couldn't delete PD "e2e-2376e96bca-abe28-eead0089-eb96-47e8-93ef-21fe41da07bb", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/disks/e2e-2376e96bca-abe28-eead0089-eb96-47e8-93ef-21fe41da07bb' is already being used by 'projects/k8s-boskos-gce-project-13/zones/us-west1-b/instances/e2e-2376e96bca-abe28-minion-group-q94q', resourceInUseByAnotherResource
Sep 20 04:27:53.938: INFO: Successfully deleted PD "e2e-2376e96bca-abe28-eead0089-eb96-47e8-93ef-21fe41da07bb".
Sep 20 04:27:53.938: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:152
Sep 20 04:27:53.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1627" for this suite.
... skipping 3890 lines ...
Sep 20 04:28:58.198: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:698
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
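The registrations above vary only the webhook's timeoutSeconds and failurePolicy: a 1s timeout against a 5s-slow webhook fails the request, is swallowed under failurePolicy Ignore, and an empty timeout defaults to 10s in v1. A sketch of the knobs involved, assuming k8s.io/api's admissionregistration/v1 types (ClientConfig, Rules, SideEffects, and AdmissionReviewVersions omitted for brevity; the names are hypothetical):

package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	timeout := int32(1)          // shorter than the webhook's 5s sleep
	policy := admissionv1.Ignore // a timeout is swallowed instead of failing the request
	cfg := admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name:           "slow.example.com",
			TimeoutSeconds: &timeout, // nil would default to 10s in v1
			FailurePolicy:  &policy,
		}},
	}
	fmt.Printf("%+v\n", cfg)
}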
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:152
Sep 20 04:29:11.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2146" for this suite.
Sep 20 04:29:19.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 245 lines ...
Sep 20 04:29:17.283: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-7539
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:698
STEP: creating the pod
Sep 20 04:29:17.623: INFO: PodSpec: initContainers in spec.initContainers
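The pod this test creates pairs a failing init container with restartPolicy Never, so the kubelet marks the pod Failed without ever starting the app container. A minimal sketch of that spec, using the same images as the test (the pod name here is hypothetical):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: v1.PodSpec{
			// Never means the failing init container is not retried, so the
			// pod goes straight to Failed and "run1" never starts.
			RestartPolicy: v1.RestartPolicyNever,
			InitContainers: []v1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
			},
			Containers: []v1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	fmt.Printf("%+v\n", pod.Spec)
}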
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:152
Sep 20 04:29:25.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep 20 04:29:33.029: INFO: namespace init-container-7539 deletion completed in 7.550558613s


• [SLOW TEST:15.746 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:693
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:93
... skipping 340 lines ...
STEP: Deleting the previously created pod
Sep 20 04:29:10.177: INFO: Deleting pod "pvc-volume-tester-f4ndt" in namespace "csi-mock-volumes-1851"
Sep 20 04:29:10.219: INFO: Wait up to 5m0s for pod "pvc-volume-tester-f4ndt" to be fully deleted
STEP: Checking CSI driver logs
Sep 20 04:29:18.346: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1851","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-0961022e-2b5f-495d-aeb6-f754761a1c6d","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-0961022e-2b5f-495d-aeb6-f754761a1c6d"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1851","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1851","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-1851","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-1851","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-0961022e-2b5f-495d-aeb6-f754761a1c6d","storage.kubernetes.io/csiProvisionerIdentity":"1568953718070-8081-csi-mock-csi-mock-volumes-1851"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0961022e-2b5f-495d-aeb6-f754761a1c6d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-0961022e-2b5f-495d-aeb6-f754761a1c6d","storage.kubernetes.io/csiProvisionerIdentity":"1568953718070-8081-csi-mock-csi-mock-volumes-1851"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0961022e-2b5f-495d-aeb6-f754761a1c6d/globalmount","target_path":"/var/lib/kubelet/pods/30d3523f-5e66-4a28-bfc9-c5a00ca084ce/volumes/kubernetes.io~csi/pvc-0961022e-2b5f-495d-aeb6-f754761a1c6d/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-0961022e-2b5f-495d-aeb6-f754761a1c6d","storage.kubernetes.io/csiProvisionerIdentity":"1568953718070-8081-csi-mock-csi-mock-volumes-1851"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/30d3523f-5e66-4a28-bfc9-c5a00ca084ce/volumes/kubernetes.io~csi/pvc-0961022e-2b5f-495d-aeb6-f754761a1c6d/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0961022e-2b5f-495d-aeb6-f754761a1c6d/globalmount"},"Response":{},"Error":""}

Sep 20 04:29:18.346: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-f4ndt
Sep 20 04:29:18.346: INFO: Deleting pod "pvc-volume-tester-f4ndt" in namespace "csi-mock-volumes-1851"
STEP: Deleting claim pvc-2wkc7
Sep 20 04:29:18.461: INFO: Waiting up to 2m0s for PersistentVolume pvc-0961022e-2b5f-495d-aeb6-f754761a1c6d to get deleted
... skipping 323 lines ...
STEP: creating execpod-noendpoints on node e2e-2376e96bca-abe28-minion-group-990f
Sep 20 04:29:29.247: INFO: Creating new exec pod
Sep 20 04:29:41.473: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node e2e-2376e96bca-abe28-minion-group-990f
Sep 20 04:29:41.473: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-4128 execpod-noendpointstz22r -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Sep 20 04:29:44.194: INFO: rc: 1
Sep 20 04:29:44.194: INFO: error contained 'REFUSED', as expected: error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=services-4128 execpod-noendpointstz22r -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80] []  <nil>  + /agnhost connect --timeout=3s no-pods:80
REFUSED
command terminated with exit code 1
 [] <nil> 0xc001db3020 exit status 1 <nil> <nil> true [0xc001810530 0xc001810548 0xc001810560] [0xc001810530 0xc001810548 0xc001810560] [0xc001810540 0xc001810558] [0x10efcb0 0x10efcb0] 0xc0016d2f00 <nil>}:
Command stdout:

stderr:
+ /agnhost connect --timeout=3s no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:152
Sep 20 04:29:44.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4128" for this suite.
Sep 20 04:29:50.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1338 lines ...
Sep 20 04:29:29.275: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:29.347: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:29.564: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:29.750: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:29.829: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:29.889: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:30.000: INFO: Lookups using dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local]

Sep 20 04:29:35.056: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:35.101: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:35.147: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:35.194: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:35.324: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:35.372: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:35.415: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:35.459: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:35.558: INFO: Lookups using dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local]

Sep 20 04:29:40.047: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:40.088: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:40.128: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:40.172: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:40.327: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:40.368: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:40.426: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:40.467: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:40.548: INFO: Lookups using dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local]

Sep 20 04:29:45.043: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:45.083: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:45.123: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:45.165: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:45.341: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:45.388: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:45.432: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:45.475: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:45.557: INFO: Lookups using dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local]

Sep 20 04:29:50.040: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:50.078: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:50.120: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:50.160: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:50.280: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:50.319: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:50.358: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:50.406: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:50.493: INFO: Lookups using dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local]

Sep 20 04:29:55.044: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:55.085: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:55.125: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:55.164: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:55.344: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:55.385: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:55.424: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:55.463: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local from pod dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a: the server could not find the requested resource (get pods dns-test-e9e542ea-8694-4067-905b-d65a59a1173a)
Sep 20 04:29:55.590: INFO: Lookups using dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7811.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7811.svc.cluster.local jessie_udp@dns-test-service-2.dns-7811.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7811.svc.cluster.local]

Sep 20 04:30:00.552: INFO: DNS probes using dns-7811/dns-test-e9e542ea-8694-4067-905b-d65a59a1173a succeeded
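Each probe round above asks the wheezy and jessie query pods to resolve the headless service's names over both UDP and TCP, retrying every ~5s until all eight lookups succeed. A rough Go equivalent of one round's lookups; the name is the one from this run and only resolves inside the cluster:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	name := "dns-test-service-2.dns-7811.svc.cluster.local"
	udp := &net.Resolver{}
	tcp := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, addr string) (net.Conn, error) {
			// Force TCP for the _tcp variant of the probe.
			d := net.Dialer{Timeout: 3 * time.Second}
			return d.DialContext(ctx, "tcp", addr)
		},
	}
	for label, r := range map[string]*net.Resolver{"udp": udp, "tcp": tcp} {
		addrs, err := r.LookupHost(context.Background(), name)
		fmt.Printf("%s lookup of %s: %v %v\n", label, name, addrs, err)
	}
}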

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 352 lines ...
STEP: Creating the service on top of the pods in kubernetes
Sep 20 04:29:22.566: INFO: Service node-port-service in namespace nettest-3363 found.
Sep 20 04:29:22.714: INFO: Service session-affinity-service in namespace nettest-3363 found.
STEP: dialing(udp) 34.82.128.63 (node) --> 10.0.11.181:90 (config.clusterIP)
Sep 20 04:29:22.798: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.0.11.181 90 | grep -v '^\s*$'] Namespace:nettest-3363 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:29:22.798: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:29:24.334: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.0.11.181 90 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 20 04:29:24.334: INFO: Waiting for [netserver-0 netserver-1 netserver-2] endpoints (expected=[netserver-0 netserver-1 netserver-2], actual=[])
Sep 20 04:29:26.378: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.0.11.181 90 | grep -v '^\s*$'] Namespace:nettest-3363 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:29:26.378: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:29:27.953: INFO: Waiting for [netserver-1 netserver-2] endpoints (expected=[netserver-0 netserver-1 netserver-2], actual=[netserver-0])
Sep 20 04:29:29.997: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.0.11.181 90 | grep -v '^\s*$'] Namespace:nettest-3363 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:29:29.998: INFO: >>> kubeConfig: /workspace/.kube/config
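The dial step shells into the host-network test pod and probes the service clusterIP with netcat (echo hostName | nc -w 1 -u 10.0.11.181 90), treating a non-empty reply as a reachable endpoint. A rough Go equivalent of that single UDP probe; the address is this run's clusterIP:port and is only reachable in-cluster:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("udp", "10.0.11.181:90", time.Second)
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()
	_ = conn.SetDeadline(time.Now().Add(time.Second))
	fmt.Fprintln(conn, "hostName")
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	// A non-empty reply names the backend pod; a timeout means no endpoint answered.
	fmt.Printf("reply: %q err: %v\n", buf[:n], err)
}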
... skipping 556 lines ...
Sep 20 04:29:08.112: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-7408
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:698
STEP: creating the pod
Sep 20 04:29:08.422: INFO: PodSpec: initContainers in spec.initContainers
Sep 20 04:30:08.639: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d1290020-9c78-4212-ba54-9e66d8df8ca0", GenerateName:"", Namespace:"init-container-7408", SelfLink:"/api/v1/namespaces/init-container-7408/pods/pod-init-d1290020-9c78-4212-ba54-9e66d8df8ca0", UID:"3df68792-d83f-4d9c-a743-c7cee2440179", ResourceVersion:"16317", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704550548, loc:(*time.Location)(0x846e1e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"422367877"}, Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-s65hh", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002ee8000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s65hh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s65hh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s65hh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002726088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"e2e-2376e96bca-abe28-minion-group-990f", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002f0a000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002726100)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002726120)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002726128), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00272612c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550548, loc:(*time.Location)(0x846e1e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550548, loc:(*time.Location)(0x846e1e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550548, loc:(*time.Location)(0x846e1e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550548, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.40.0.4", PodIP:"10.64.1.122", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.1.122"}}, StartTime:(*v1.Time)(0xc002f1e060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000c30150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000c301c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://90a1f149da19e7b715404a31eca605324efabdd4d7cf954f036fedb998116a38", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002f1e0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002f1e080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0027261af)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:152
Sep 20 04:30:08.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7408" for this suite.
Sep 20 04:30:22.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 04:30:24.053: INFO: namespace init-container-7408 deletion completed in 15.366613137s


• [SLOW TEST:75.942 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:693
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:151
... skipping 3184 lines ...
STEP: Building a namespace api object, basename node-problem-detector
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-problem-detector-1807
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
  test/e2e/node/node_problem_detector.go:49
Sep 20 04:27:20.259: INFO: Waiting up to 1m0s for all nodes to be ready
[It] should run without error
  test/e2e/node/node_problem_detector.go:57
STEP: Getting all nodes and their SSH-able IP addresses
STEP: Check node "34.82.128.63:22" has node-problem-detector process
STEP: Check node-problem-detector is running fine on node "34.82.128.63:22"
STEP: Inject log to trigger AUFSUmountHung on node "34.82.128.63:22"
STEP: Check node "34.83.230.168:22" has node-problem-detector process
... skipping 25 lines ...
Sep 20 04:31:09.701: INFO: namespace node-problem-detector-1807 deletion completed in 7.413957704s


• [SLOW TEST:230.097 seconds]
[k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
test/e2e/framework/framework.go:693
  should run without error
  test/e2e/node/node_problem_detector.go:57
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:93
Sep 20 04:31:09.704: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 2302 lines ...
Sep 20 04:31:36.350: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [sig-storage] PersistentVolumes:vsphere
  test/e2e/framework/framework.go:152
Sep 20 04:31:36.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-6942" for this suite.
Sep 20 04:31:42.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 04:31:42.724: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"webhook.example.com", Version:"v1"}
Sep 20 04:31:42.724: INFO: Error discovering server preferred namespaced resources: unable to retrieve the complete list of server APIs: webhook.example.com/v1: the server could not find the requested resource, webhook.example.com/v2: the server could not find the requested resource, retrying in 2s.
Sep 20 04:31:47.207: INFO: namespace pv-6942 deletion completed in 10.815754456s
[AfterEach] [sig-storage] PersistentVolumes:vsphere
  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:115
Sep 20 04:31:47.207: INFO: AfterEach: Cleaning up test resources
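The retry above is the namespace-teardown helper tolerating a stale aggregated API: discovery of server-preferred resources returns partial results plus an error while the leftover webhook.example.com group is unreachable, so the framework retries. A rough sketch of that discovery call with an illustrative retry loop (the loop bounds are assumptions, not the framework's actual policy):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	dc := discovery.NewDiscoveryClientForConfigOrDie(cfg)
	for i := 0; i < 5; i++ {
		// ServerPreferredNamespacedResources can return partial results plus an
		// error when an aggregated group (e.g. a leftover webhook API) is gone.
		if _, err := dc.ServerPreferredNamespacedResources(); err == nil {
			fmt.Println("discovery complete")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("discovery still failing after retries")
}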


... skipping 332 lines ...
Sep 20 04:30:32.359: INFO: PersistentVolumeClaim pvc-94796 found but phase is Pending instead of Bound.
Sep 20 04:30:34.396: INFO: PersistentVolumeClaim pvc-94796 found but phase is Pending instead of Bound.
Sep 20 04:30:36.438: INFO: PersistentVolumeClaim pvc-94796 found but phase is Pending instead of Bound.
Sep 20 04:30:38.476: INFO: PersistentVolumeClaim pvc-94796 found but phase is Pending instead of Bound.
Sep 20 04:30:40.513: INFO: PersistentVolumeClaim pvc-94796 found and phase=Bound (10.241105398s)
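The Pending-then-Bound lines above are a plain phase poll. A minimal client-go sketch of the same wait (the namespace and claim name are copied from this log; wait.PollImmediate and the context-taking Get assume a recent client-go release):

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll every 2s, give up after 3m, matching the cadence in the log above.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims("csi-mock-volumes-5309").Get(context.TODO(), "pvc-94796", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("claim bound")
}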
STEP: checking for CSIInlineVolumes feature
Sep 20 04:31:02.843: INFO: Error getting logs for pod csi-inline-volume-987lf: the server could not find the requested resource (get pods csi-inline-volume-987lf)
STEP: Deleting pod csi-inline-volume-987lf in namespace csi-mock-volumes-5309
STEP: Deleting the previously created pod
Sep 20 04:31:11.002: INFO: Deleting pod "pvc-volume-tester-cnskh" in namespace "csi-mock-volumes-5309"
Sep 20 04:31:11.047: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cnskh" to be fully deleted
STEP: Checking CSI driver logs
Sep 20 04:31:17.485: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5309","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-e73cc99b-4fe4-48fe-b93b-69b470d20fae","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-e73cc99b-4fe4-48fe-b93b-69b470d20fae"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5309","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5309","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5309","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-5309","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e73cc99b-4fe4-48fe-b93b-69b470d20fae","storage.kubernetes.io/csiProvisionerIdentity":"1568953840121-8081-csi-mock-csi-mock-volumes-5309"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e73cc99b-4fe4-48fe-b93b-69b470d20fae/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e73cc99b-4fe4-48fe-b93b-69b470d20fae","storage.kubernetes.io/csiProvisionerIdentity":"1568953840121-8081-csi-mock-csi-mock-volumes-5309"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e73cc99b-4fe4-48fe-b93b-69b470d20fae/globalmount","target_path":"/var/lib/kubelet/pods/ebe95454-496b-4127-85d5-bb2cf5dbe70c/volumes/kubernetes.io~csi/pvc-e73cc99b-4fe4-48fe-b93b-69b470d20fae/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"pvc-volume-tester-cnskh","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-5309","csi.storage.k8s.io/pod.uid":"ebe95454-496b-4127-85d5-bb2cf5dbe70c","csi.storage.k8s.io/serviceAccount.name":"default","name":"pvc-e73cc99b-4fe4-48fe-b93b-69b470d20fae","storage.kubernetes.io/csiProvisionerIdentity":"1568953840121-8081-csi-mock-csi-mock-volumes-5309"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ebe95454-496b-4127-85d5-bb2cf5dbe70c/volumes/kubernetes.io~csi/pvc-e73cc99b-4fe4-48fe-b93b-69b470d20fae/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e73cc99b-4fe4-48fe-b93b-69b470d20fae/globalmount"},"Response":{},"Error":""}

Sep 20 04:31:17.485: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Sep 20 04:31:17.485: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-cnskh
Sep 20 04:31:17.485: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-5309
Sep 20 04:31:17.485: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: ebe95454-496b-4127-85d5-bb2cf5dbe70c
Sep 20 04:31:17.485: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
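The gRPC trace above is the standard CSI volume lifecycle: an identity handshake (Probe, GetPluginInfo, GetPluginCapabilities), CreateVolume and ControllerPublishVolume on the controller service, NodeStageVolume then NodePublishVolume on the node service, and the mirror-image NodeUnpublishVolume/NodeUnstageVolume during teardown. A minimal Go sketch of the handshake step against a mock driver's socket (the socket path and the insecure dial are assumptions, not taken from this log):

package main

import (
	"context"
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

func main() {
	// Assumption: the mock driver listens on this Unix socket; real deployments
	// register the socket with kubelet's plugin registration directory instead.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-mock/csi.sock", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// Probe is the first call in the trace above; a ready=true response means
	// the driver is healthy enough to accept the rest of the lifecycle calls.
	resp, err := csi.NewIdentityClient(conn).Probe(context.TODO(), &csi.ProbeRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("driver ready:", resp.GetReady().GetValue())
}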
... skipping 138 lines ...
Sep 20 04:31:36.571: INFO: Creating a PV followed by a PVC
Sep 20 04:31:36.644: INFO: Waiting for PV local-pvvnsgk to bind to PVC pvc-rf6pm
Sep 20 04:31:36.644: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-rf6pm] to have phase Bound
Sep 20 04:31:36.679: INFO: PersistentVolumeClaim pvc-rf6pm found and phase=Bound (35.719653ms)
Sep 20 04:31:36.679: INFO: Waiting up to 3m0s for PersistentVolume local-pvvnsgk to have phase Bound
Sep 20 04:31:36.714: INFO: PersistentVolume local-pvvnsgk found and phase=Bound (34.125822ms)
[It] should fail scheduling due to different NodeAffinity
  test/e2e/storage/persistent_volumes-local.go:365
STEP: local-volume-type: dir
STEP: Initializing test volumes
Sep 20 04:31:36.788: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.185.226.224 --kubeconfig=/workspace/.kube/config exec --namespace=persistent-local-volumes-test-2834 hostexec-e2e-2376e96bca-abe28-minion-group-990f -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c11f4a66-1525-4754-914f-7a2c5f0dd614'
Sep 20 04:31:38.566: INFO: stderr: ""
Sep 20 04:31:38.566: INFO: stdout: ""
... skipping 26 lines ...

• [SLOW TEST:23.166 seconds]
[sig-storage] PersistentVolumes-local 
test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  test/e2e/storage/persistent_volumes-local.go:343
    should fail scheduling due to different NodeAffinity
    test/e2e/storage/persistent_volumes-local.go:365
------------------------------
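The test above builds a local PersistentVolume whose required node affinity points at one node, then schedules a consuming pod pinned to a different node, so the scheduler must reject it. A sketch of the kind of PV object involved (the path and node name are copied from the log above; the capacity, access mode, and object name are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := v1.PersistentVolumeFilesystem
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-example"},
		Spec: v1.PersistentVolumeSpec{
			Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			VolumeMode:  &mode,
			PersistentVolumeSource: v1.PersistentVolumeSource{
				Local: &v1.LocalVolumeSource{Path: "/tmp/local-volume-test-c11f4a66-1525-4754-914f-7a2c5f0dd614"},
			},
			// The scheduler rejects any pod that cannot land on this node.
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{"e2e-2376e96bca-abe28-minion-group-990f"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Name)
}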
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  test/e2e/storage/testsuites/base.go:93
... skipping 152 lines ...
Sep 20 04:31:21.195: INFO: Unable to read jessie_udp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:21.230: INFO: Unable to read jessie_tcp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:21.267: INFO: Unable to read jessie_udp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:21.304: INFO: Unable to read jessie_tcp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:21.343: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:21.387: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:21.612: INFO: Lookups using dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5451 wheezy_tcp@dns-test-service.dns-5451 wheezy_udp@dns-test-service.dns-5451.svc wheezy_tcp@dns-test-service.dns-5451.svc wheezy_udp@_http._tcp.dns-test-service.dns-5451.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5451.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5451 jessie_tcp@dns-test-service.dns-5451 jessie_udp@dns-test-service.dns-5451.svc jessie_tcp@dns-test-service.dns-5451.svc jessie_udp@_http._tcp.dns-test-service.dns-5451.svc jessie_tcp@_http._tcp.dns-test-service.dns-5451.svc]

Sep 20 04:31:26.767: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:26.938: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:27.095: INFO: Unable to read wheezy_udp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:27.250: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:27.434: INFO: Unable to read wheezy_udp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
... skipping 5 lines ...
Sep 20 04:31:28.323: INFO: Unable to read jessie_udp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:28.368: INFO: Unable to read jessie_tcp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:28.418: INFO: Unable to read jessie_udp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:28.463: INFO: Unable to read jessie_tcp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:28.506: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:28.552: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:28.827: INFO: Lookups using dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5451 wheezy_tcp@dns-test-service.dns-5451 wheezy_udp@dns-test-service.dns-5451.svc wheezy_tcp@dns-test-service.dns-5451.svc wheezy_udp@_http._tcp.dns-test-service.dns-5451.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5451.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5451 jessie_tcp@dns-test-service.dns-5451 jessie_udp@dns-test-service.dns-5451.svc jessie_tcp@dns-test-service.dns-5451.svc jessie_udp@_http._tcp.dns-test-service.dns-5451.svc jessie_tcp@_http._tcp.dns-test-service.dns-5451.svc]

Sep 20 04:31:31.660: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:31.698: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:31.734: INFO: Unable to read wheezy_udp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:31.771: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:31.811: INFO: Unable to read wheezy_udp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
... skipping 5 lines ...
Sep 20 04:31:32.245: INFO: Unable to read jessie_udp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:32.280: INFO: Unable to read jessie_tcp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:32.326: INFO: Unable to read jessie_udp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:32.363: INFO: Unable to read jessie_tcp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:32.407: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:32.445: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:32.762: INFO: Lookups using dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5451 wheezy_tcp@dns-test-service.dns-5451 wheezy_udp@dns-test-service.dns-5451.svc wheezy_tcp@dns-test-service.dns-5451.svc wheezy_udp@_http._tcp.dns-test-service.dns-5451.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5451.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5451 jessie_tcp@dns-test-service.dns-5451 jessie_udp@dns-test-service.dns-5451.svc jessie_tcp@dns-test-service.dns-5451.svc jessie_udp@_http._tcp.dns-test-service.dns-5451.svc jessie_tcp@_http._tcp.dns-test-service.dns-5451.svc]

Sep 20 04:31:36.656: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:36.699: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:36.735: INFO: Unable to read wheezy_udp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:36.771: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:36.812: INFO: Unable to read wheezy_udp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
... skipping 5 lines ...
Sep 20 04:31:37.339: INFO: Unable to read jessie_udp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:37.375: INFO: Unable to read jessie_tcp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:37.425: INFO: Unable to read jessie_udp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:37.485: INFO: Unable to read jessie_tcp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:37.524: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:37.562: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:37.796: INFO: Lookups using dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5451 wheezy_tcp@dns-test-service.dns-5451 wheezy_udp@dns-test-service.dns-5451.svc wheezy_tcp@dns-test-service.dns-5451.svc wheezy_udp@_http._tcp.dns-test-service.dns-5451.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5451.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5451 jessie_tcp@dns-test-service.dns-5451 jessie_udp@dns-test-service.dns-5451.svc jessie_tcp@dns-test-service.dns-5451.svc jessie_udp@_http._tcp.dns-test-service.dns-5451.svc jessie_tcp@_http._tcp.dns-test-service.dns-5451.svc]

Sep 20 04:31:41.653: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:41.700: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:41.751: INFO: Unable to read wheezy_udp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:41.792: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:41.841: INFO: Unable to read wheezy_udp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
... skipping 5 lines ...
Sep 20 04:31:42.578: INFO: Unable to read jessie_udp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:42.676: INFO: Unable to read jessie_tcp@dns-test-service.dns-5451 from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:42.737: INFO: Unable to read jessie_udp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:42.782: INFO: Unable to read jessie_tcp@dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:42.823: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:42.866: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5451.svc from pod dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6: the server could not find the requested resource (get pods dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6)
Sep 20 04:31:43.139: INFO: Lookups using dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5451 wheezy_tcp@dns-test-service.dns-5451 wheezy_udp@dns-test-service.dns-5451.svc wheezy_tcp@dns-test-service.dns-5451.svc wheezy_udp@_http._tcp.dns-test-service.dns-5451.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5451.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5451 jessie_tcp@dns-test-service.dns-5451 jessie_udp@dns-test-service.dns-5451.svc jessie_tcp@dns-test-service.dns-5451.svc jessie_udp@_http._tcp.dns-test-service.dns-5451.svc jessie_tcp@_http._tcp.dns-test-service.dns-5451.svc]

Sep 20 04:31:49.420: INFO: DNS probes using dns-5451/dns-test-7c589251-7d5f-40f8-a91a-f2f92e7a99d6 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
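The probe pattern above queries each service name over UDP and TCP from both test images (wheezy and jessie) and retries the whole set until every lookup succeeds. A simplified Go sketch of that retry loop (it must run inside a cluster pod so the cluster DNS search path resolves the short names; the name list mirrors the log, the cadence is illustrative):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	names := []string{
		"dns-test-service",
		"dns-test-service.dns-5451",
		"dns-test-service.dns-5451.svc",
	}
	r := &net.Resolver{}
	for {
		failed := 0
		for _, n := range names {
			ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
			if _, err := r.LookupHost(ctx, n); err != nil {
				failed++
			}
			cancel()
		}
		if failed == 0 {
			fmt.Println("all lookups succeeded")
			return
		}
		// Retry the entire set, as the e2e test does, until DNS converges.
		fmt.Printf("%d lookups failed, retrying\n", failed)
		time.Sleep(5 * time.Second)
	}
}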
... skipping 282 lines ...
      test/e2e/storage/testsuites/subpath.go:361

      Driver local doesn't support InlineVolume -- skipping

      test/e2e/storage/testsuites/base.go:146
------------------------------
SSS{"component":"entrypoint","file":"prow/entrypoint/run.go:163","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2019-09-20T04:32:05Z"}
Traceback (most recent call last):
  File "../test-infra/scenarios/kubernetes_e2e.py", line 778, in <module>
    main(parse_args())
  File "../test-infra/scenarios/kubernetes_e2e.py", line 626, in main
    mode.start(runner_args)
  File "../test-infra/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 29 lines ...
Sep 20 04:31:51.754: INFO: Got stdout from 35.227.158.60:22: Hello from prow@e2e-2376e96bca-abe28-minion-group-q94q
STEP: SSH'ing to 1 node and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 node and running echo "stdout" && echo "stderr" >&2 && exit 7
Sep 20 04:31:52.808: INFO: Got stdout from 34.82.128.63:22: stdout
Sep 20 04:31:52.808: INFO: Got stderr from 34.82.128.63:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing prow@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [k8s.io] [sig-node] SSH
  test/e2e/framework/framework.go:152
Sep 20 04:31:57.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-5282" for this suite.
Sep 20 04:32:03.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 04:32:06.074: INFO: namespace ssh-5282 deletion completed in 8.205817718s
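The SSH checks above exercise a command that succeeds, one whose output is filtered away, one that exits non-zero with output on both streams, and a dial against a nonexistent host. A minimal golang.org/x/crypto/ssh sketch of the exit-7 case (the password auth and host-key policy are placeholders; the e2e framework loads the GCE SSH key instead):

package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	cfg := &ssh.ClientConfig{
		User:            "prow",
		Auth:            []ssh.AuthMethod{ssh.Password("example")}, // placeholder auth
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),               // test-only policy
	}
	client, err := ssh.Dial("tcp", "34.82.128.63:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	var stdout, stderr bytes.Buffer
	sess.Stdout, sess.Stderr = &stdout, &stderr
	// A non-zero exit comes back as *ssh.ExitError carrying the status (7 here).
	err = sess.Run(`echo "stdout" && echo "stderr" >&2 && exit 7`)
	if exitErr, ok := err.(*ssh.ExitError); ok {
		fmt.Printf("stdout=%q stderr=%q code=%d\n", stdout.String(), stderr.String(), exitErr.ExitStatus())
	}
}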
... skipping 346 lines ...