PR (draveness): feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-09-20 02:40
Elapsed: 29m8s
Revision: 9bebce9edc4244cba9dfbd96d73b8138809173e5
Refs: 82703

No Test Failures!


Error lines from build-log.txt

... skipping 142 lines ...
INFO: 5212 processes: 4956 remote cache hit, 29 processwrapper-sandbox, 227 remote.
INFO: Build completed successfully, 5305 total actions
INFO: Build completed successfully, 5305 total actions
make: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
2019/09/20 02:48:20 process.go:155: Step 'make -C /home/prow/go/src/k8s.io/kubernetes bazel-release' finished in 8m13.565088602s
2019/09/20 02:48:20 util.go:255: Flushing memory.
2019/09/20 02:48:51 util.go:265: flushMem error (page cache): exit status 1
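The flushMem failure above is harmless: the step tries to drop the kernel page cache between the build and test phases and exits non-zero when it lacks the privileges to do so. A hedged sketch of the standard Linux page-cache drop such a step performs (the exact kubetest implementation is not shown in this log):

sync                                # flush dirty pages to disk first
echo 1 > /proc/sys/vm/drop_caches   # 1 = free page cache only; needs root, hence "exit status 1" here
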
2019/09/20 02:48:51 process.go:153: Running: /home/prow/go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce --allow-dup
push-build.sh: BEGIN main on a0350432-db4f-11e9-85fa-522193c84e76 Fri Sep 20 02:48:51 UTC 2019

$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Starting local Bazel server and connecting to it...
INFO: Invocation ID: 281dbcdd-6022-480b-b430-254e78eeadf3
... skipping 866 lines ...
Trying to find master named 'e2e-b7cf44e8f4-abe28-master'
Looking for address 'e2e-b7cf44e8f4-abe28-master-ip'
Using master: e2e-b7cf44e8f4-abe28-master (external IP: 104.198.98.163; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

............Kubernetes cluster created.
Cluster "k8s-jkns-e2e-gce-ubuntu_e2e-b7cf44e8f4-abe28" set.
User "k8s-jkns-e2e-gce-ubuntu_e2e-b7cf44e8f4-abe28" set.
Context "k8s-jkns-e2e-gce-ubuntu_e2e-b7cf44e8f4-abe28" created.
Switched to context "k8s-jkns-e2e-gce-ubuntu_e2e-b7cf44e8f4-abe28".
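
For reference, the four kubeconfig messages above are exactly what these kubectl commands print; the cluster-up scripts run them with certificate flags, omitted here:

CTX="k8s-jkns-e2e-gce-ubuntu_e2e-b7cf44e8f4-abe28"
kubectl config set-cluster "$CTX" --server=https://104.198.98.163   # -> Cluster "..." set.
kubectl config set-credentials "$CTX"                               # -> User "..." set.
kubectl config set-context "$CTX" --cluster="$CTX" --user="$CTX"    # -> Context "..." created.
kubectl config use-context "$CTX"                                   # -> Switched to context "...".
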
... skipping 1613 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should support sysctls
  test/e2e/common/sysctl.go:67
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:152
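
The sysctl test above exercises the pod-level sysctl API. A minimal sketch of a pod with that shape, using illustrative names (kernel.shm_rmid_forced is on the kubelet's safe-sysctl whitelist, so no extra node configuration is needed):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo        # illustrative name
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sysctl kernel.shm_rmid_forced"]   # prints the applied value, then the pod succeeds
EOF
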
... skipping 2397 lines ...
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-7002 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Sep 20 03:02:24.202: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-7002 execpod-5b2r2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/; test "$?" -ne "0"'
Sep 20 03:02:25.552: INFO: rc: 1
Sep 20 03:02:25.552: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-7002 execpod-5b2r2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/; test "$?" -ne "0"] []  <nil> NOW: 2019-09-20 03:02:25.304549995 +0000 UTC m=+16.661578060 + curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1
 [] <nil> 0xc0009e98c0 exit status 1 <nil> <nil> true [0xc001e36b48 0xc001e36b80 0xc001e36bb0] [0xc001e36b48 0xc001e36b80 0xc001e36bb0] [0xc001e36b68 0xc001e36ba8] [0x10efcb0 0x10efcb0] 0xc0024a3260 <nil>}:
Command stdout:
NOW: 2019-09-20 03:02:25.304549995 +0000 UTC m=+16.661578060
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Sep 20 03:02:27.552: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-7002 execpod-5b2r2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/; test "$?" -ne "0"'
Sep 20 03:02:30.671: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Sep 20 03:02:30.671: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Sep 20 03:02:30.753: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-7002 execpod-5b2r2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/'
Sep 20 03:02:34.220: INFO: rc: 7
Sep 20 03:02:34.220: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-7002 execpod-5b2r2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/] []  <nil>  + curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/
command terminated with exit code 7
 [] <nil> 0xc0021c89c0 exit status 7 <nil> <nil> true [0xc00127e2e8 0xc00127e460 0xc00127e590] [0xc00127e2e8 0xc00127e460 0xc00127e590] [0xc00127e458 0xc00127e508] [0x10efcb0 0x10efcb0] 0xc0021d0c00 <nil>}:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Sep 20 03:02:36.220: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-7002 execpod-5b2r2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/'
Sep 20 03:02:37.669: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/\n"
Sep 20 03:02:37.669: INFO: stdout: "NOW: 2019-09-20 03:02:37.530531956 +0000 UTC m=+28.887560009"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-7002
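
The whole sequence above is one probe pattern: run curl inside a helper pod and let its exit code carry the verdict (0 means the endpoint answered, 7 means the connection failed). The trailing test inverts that when un-readiness is the expected state:

kubectl exec --namespace=services-7002 execpod-5b2r2 -- /bin/sh -x -c \
  'curl -q -s --connect-timeout 2 http://tolerate-unready.services-7002.svc.cluster.local:80/; test "$?" -ne "0"'
# Succeeds only when curl FAILS, i.e. when the service correctly has no ready endpoint.
# In the log: "test 0 -ne 0" = endpoint still reachable (retry); "test 7 -ne 0" = expected failure reached.
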
... skipping 330 lines ...
Sep 20 03:01:58.774: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-bvvhr] to have phase Bound
Sep 20 03:01:58.926: INFO: PersistentVolumeClaim pvc-bvvhr found but phase is Pending instead of Bound.
Sep 20 03:02:00.964: INFO: PersistentVolumeClaim pvc-bvvhr found and phase=Bound (2.189908958s)
Sep 20 03:02:00.964: INFO: Waiting up to 3m0s for PersistentVolume gce-dhvb7 to have phase Bound
Sep 20 03:02:01.001: INFO: PersistentVolume gce-dhvb7 found and phase=Bound (37.854166ms)
STEP: Creating the Client Pod
[It] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:139
STEP: Deleting the Persistent Volume
Sep 20 03:02:19.236: INFO: Deleting PersistentVolume "gce-dhvb7"
STEP: Deleting the client pod
Sep 20 03:02:19.526: INFO: Deleting pod "pvc-tester-65bck" in namespace "pv-994"
Sep 20 03:02:19.566: INFO: Wait up to 5m0s for pod "pvc-tester-65bck" to be fully deleted
... skipping 16 lines ...
Sep 20 03:03:00.239: INFO: Successfully deleted PD "e2e-b7cf44e8f4-abe28-2c9099a5-80a5-4825-ab25-5c3c0f584c34".


• [SLOW TEST:64.789 seconds]
[sig-storage] PersistentVolumes GCEPD
test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:139
------------------------------
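
The Bound-phase waits logged in that test ("Waiting up to 3m0s for PersistentVolumeClaims [pvc-bvvhr] to have phase Bound") are a simple poll on the claim's status. A minimal shell equivalent, using the names from the log:

until [ "$(kubectl get pvc pvc-bvvhr -n pv-994 -o jsonpath='{.status.phase}')" = "Bound" ]; do
  sleep 2   # the framework polls on a similar interval, with a 3m timeout
done
# Recent kubectl can express the same wait directly:
#   kubectl wait -n pv-994 --for=jsonpath='{.status.phase}'=Bound pvc/pvc-bvvhr
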
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:93
... skipping 259 lines ...
Sep 20 03:01:56.045: INFO: PersistentVolumeClaim csi-hostpathwrdn7 found but phase is Pending instead of Bound.
Sep 20 03:01:58.189: INFO: PersistentVolumeClaim csi-hostpathwrdn7 found but phase is Pending instead of Bound.
Sep 20 03:02:00.257: INFO: PersistentVolumeClaim csi-hostpathwrdn7 found but phase is Pending instead of Bound.
Sep 20 03:02:02.300: INFO: PersistentVolumeClaim csi-hostpathwrdn7 found and phase=Bound (24.760232486s)
STEP: Expanding non-expandable pvc
Sep 20 03:02:02.386: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Sep 20 03:02:02.463: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:04.542: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:06.542: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:08.545: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:10.545: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:12.539: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:14.543: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:16.554: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:18.540: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:20.562: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:22.557: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:24.670: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:26.542: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:28.555: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:30.542: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:32.540: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:02:32.634: INFO: Error updating pvc csi-hostpathwrdn7 with persistentvolumeclaims "csi-hostpathwrdn7" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Sep 20 03:02:32.634: INFO: Deleting PersistentVolumeClaim "csi-hostpathwrdn7"
Sep 20 03:02:32.691: INFO: Waiting up to 5m0s for PersistentVolume pvc-ec5f0527-66c9-4701-9a4c-dbb1b2ca0497 to get deleted
Sep 20 03:02:32.743: INFO: PersistentVolume pvc-ec5f0527-66c9-4701-9a4c-dbb1b2ca0497 found and phase=Released (51.811887ms)
Sep 20 03:02:37.817: INFO: PersistentVolume pvc-ec5f0527-66c9-4701-9a4c-dbb1b2ca0497 was removed
STEP: Deleting sc
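
The repeated "forbidden" errors above are the point of this test: it deliberately tries to expand a PVC whose StorageClass does not permit resizing. Expansion is only admitted when the provisioning class opts in, sketched here with illustrative names:

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc              # illustrative name
provisioner: hostpath.csi.k8s.io   # any driver that implements volume expansion
allowVolumeExpansion: true         # without this, the resize request is rejected as above
EOF
# Growing a claim under such a class is then a plain spec patch (6Gi matches the newSize in the log):
kubectl patch pvc my-claim -p '{"spec":{"resources":{"requests":{"storage":"6Gi"}}}}'
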
... skipping 1175 lines ...
Sep 20 03:03:11.047: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:11.097: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:11.715: INFO: Unable to read jessie_udp@dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:11.851: INFO: Unable to read jessie_tcp@dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:11.927: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:12.015: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:12.461: INFO: Lookups using dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d failed for: [wheezy_udp@dns-test-service.dns-5105.svc.cluster.local wheezy_tcp@dns-test-service.dns-5105.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local jessie_udp@dns-test-service.dns-5105.svc.cluster.local jessie_tcp@dns-test-service.dns-5105.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local]

Sep 20 03:03:17.600: INFO: Unable to read wheezy_udp@dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:17.825: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:18.306: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:18.548: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:19.213: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:19.260: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local from pod dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d: the server could not find the requested resource (get pods dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d)
Sep 20 03:03:19.540: INFO: Lookups using dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d failed for: [wheezy_udp@dns-test-service.dns-5105.svc.cluster.local wheezy_tcp@dns-test-service.dns-5105.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5105.svc.cluster.local]

Sep 20 03:03:24.087: INFO: DNS probes using dns-5105/dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
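
The names probed above are the two standard service DNS record forms, queried over both UDP and TCP from "wheezy" and "jessie" test containers; the lookups fail until kube-dns has synced the endpoints and then succeed:

# A record:   dns-test-service.dns-5105.svc.cluster.local             (<service>.<namespace>.svc.<zone>)
# SRV record: _http._tcp.dns-test-service.dns-5105.svc.cluster.local  (_<port>._<proto>.<service>...)
# A hedged manual check, from any container in the pod that has nslookup installed:
kubectl exec -n dns-5105 dns-test-90c56075-8a08-4546-9245-58c4dbe29b2d -- \
  nslookup dns-test-service.dns-5105.svc.cluster.local
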
... skipping 1957 lines ...
STEP: cleaning the environment after gcepd
Sep 20 03:03:32.424: INFO: Deleting pod "gcepd-client" in namespace "volume-4010"
Sep 20 03:03:32.481: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Sep 20 03:03:44.684: INFO: Deleting PersistentVolumeClaim "pvc-x8zrr"
Sep 20 03:03:44.760: INFO: Deleting PersistentVolume "gcepd-6ws8w"
Sep 20 03:03:46.440: INFO: error deleting PD "e2e-b7cf44e8f4-abe28-edaed45c-6437-4d2e-b661-f65fde8d90f4": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-edaed45c-6437-4d2e-b661-f65fde8d90f4' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-flc5', resourceInUseByAnotherResource
Sep 20 03:03:46.440: INFO: Couldn't delete PD "e2e-b7cf44e8f4-abe28-edaed45c-6437-4d2e-b661-f65fde8d90f4", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-edaed45c-6437-4d2e-b661-f65fde8d90f4' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-flc5', resourceInUseByAnotherResource
Sep 20 03:03:53.688: INFO: Successfully deleted PD "e2e-b7cf44e8f4-abe28-edaed45c-6437-4d2e-b661-f65fde8d90f4".
Sep 20 03:03:53.688: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:152
Sep 20 03:03:53.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4010" for this suite.
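
The 400 errors during cleanup above are the usual race: the PD is still attached to the node while the client pod is torn down, and the framework simply retries the delete until the detach completes. The manual equivalent of what the retry waits for, with names taken from the log:

gcloud compute instances detach-disk e2e-b7cf44e8f4-abe28-minion-group-flc5 \
  --disk=e2e-b7cf44e8f4-abe28-edaed45c-6437-4d2e-b661-f65fde8d90f4 --zone=us-west1-b
gcloud compute disks delete e2e-b7cf44e8f4-abe28-edaed45c-6437-4d2e-b661-f65fde8d90f4 \
  --zone=us-west1-b --quiet
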
... skipping 259 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 03:03:46.804: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-3234
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  test/e2e/apps/job.go:226
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:152
Sep 20 03:03:59.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3234" for this suite.
Sep 20 03:04:05.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 03:04:06.895: INFO: namespace job-3234 deletion completed in 7.562392752s


• [SLOW TEST:20.092 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  test/e2e/apps/job.go:226
------------------------------
S
------------------------------
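
The Job test above creates always-failing pods under a small backoffLimit and asserts the controller stops retrying. A minimal sketch of a Job with that shape (name and image illustrative); with backoffLimit: 1 the controller gives up after two failed pods, matching "Checking that 2 pod created and status is failed":

kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: backofflimit-demo   # illustrative name
spec:
  backoffLimit: 1           # retries allowed before the Job is marked Failed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: fail
        image: busybox
        command: ["sh", "-c", "exit 1"]   # always fails, driving the backoff
EOF
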
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:93
... skipping 87 lines ...
STEP: Creating the service on top of the pods in kubernetes
Sep 20 03:03:19.135: INFO: Service node-port-service in namespace nettest-1842 found.
Sep 20 03:03:19.309: INFO: Service session-affinity-service in namespace nettest-1842 found.
STEP: dialing(udp) test-container-pod --> 10.0.38.159:90
Sep 20 03:03:19.393: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.2.50:8080/dial?request=hostName&protocol=udp&host=10.0.38.159&port=90&tries=1'] Namespace:nettest-1842 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 03:03:19.393: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 03:03:27.511: INFO: Tries: 10, in try: 0, stdout: {"errors":["reading from udp connection failed. err:'read udp 10.64.2.50:59347-\u003e10.0.38.159:90: i/o timeout'"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-1842", SelfLink:"/api/v1/namespaces/nettest-1842/pods/host-test-container-pod", UID:"6e18966c-70c1-438c-9240-3173fa4901fb", ResourceVersion:"3609", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704545392, loc:(*time.Location)(0x846e1e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-m2rcn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002fc1100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m2rcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003022538), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"e2e-b7cf44e8f4-abe28-minion-group-qhwb", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002f39260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003022570)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003022590)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003022598), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00302259c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545392, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545397, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545397, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545392, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.40.0.4", PodIP:"10.40.0.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.40.0.4"}}, StartTime:(*v1.Time)(0xc00304a720), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00304a740), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", ImageID:"docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", ContainerID:"docker://0871125d77313a29dede1cbb36459d435e8301e0ea03fe79fee726d54e874254", Started:(*bool)(0xc003022610)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
Sep 20 03:03:29.550: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.2.50:8080/dial?request=hostName&protocol=udp&host=10.0.38.159&port=90&tries=1'] Namespace:nettest-1842 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 03:03:29.550: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 03:03:30.759: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-2"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-1842", SelfLink:"/api/v1/namespaces/nettest-1842/pods/host-test-container-pod", UID:"6e18966c-70c1-438c-9240-3173fa4901fb", ResourceVersion:"3609", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704545392, loc:(*time.Location)(0x846e1e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-m2rcn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002fc1100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m2rcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003022538), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"e2e-b7cf44e8f4-abe28-minion-group-qhwb", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002f39260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003022570)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003022590)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003022598), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00302259c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545392, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545397, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545397, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545392, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.40.0.4", PodIP:"10.40.0.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.40.0.4"}}, StartTime:(*v1.Time)(0xc00304a720), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00304a740), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", ImageID:"docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", ContainerID:"docker://0871125d77313a29dede1cbb36459d435e8301e0ea03fe79fee726d54e874254", Started:(*bool)(0xc003022610)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
Sep 20 03:03:32.797: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.2.50:8080/dial?request=hostName&protocol=udp&host=10.0.38.159&port=90&tries=1'] Namespace:nettest-1842 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 03:03:32.797: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 03:03:33.766: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-2"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-1842", SelfLink:"/api/v1/namespaces/nettest-1842/pods/host-test-container-pod", UID:"6e18966c-70c1-438c-9240-3173fa4901fb", ResourceVersion:"3609", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704545392, loc:(*time.Location)(0x846e1e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-m2rcn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002fc1100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m2rcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003022538), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"e2e-b7cf44e8f4-abe28-minion-group-qhwb", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002f39260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003022570)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003022590)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003022598), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00302259c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545392, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545397, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545397, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545392, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.40.0.4", PodIP:"10.40.0.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.40.0.4"}}, StartTime:(*v1.Time)(0xc00304a720), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00304a740), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", ImageID:"docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", ContainerID:"docker://0871125d77313a29dede1cbb36459d435e8301e0ea03fe79fee726d54e874254", Started:(*bool)(0xc003022610)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
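
Each dial attempt above reduces to one HTTP GET against the agnhost "dial" endpoint on the host-network test pod, which relays a UDP hostName request to the service and reports the result as JSON; try 0 hit a UDP read timeout and the retries reached netserver-2:

curl -g -q -s 'http://10.64.2.50:8080/dial?request=hostName&protocol=udp&host=10.0.38.159&port=90&tries=1'
# success: {"responses":["netserver-2"]}
# failure: {"errors":["reading from udp connection failed. err:'... i/o timeout'"]}
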
... skipping 229 lines ...
Sep 20 03:03:56.185: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-1675
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:698
STEP: creating the pod
Sep 20 03:03:56.594: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:152
Sep 20 03:04:03.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep 20 03:04:12.511: INFO: namespace init-container-1675 deletion completed in 9.067287461s


• [SLOW TEST:16.326 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:693
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
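
The init-container conformance test above builds a RestartNever pod whose init container always fails, so the pod goes to Failed and the app container never starts. A minimal sketch with illustrative names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo     # illustrative name
spec:
  restartPolicy: Never     # no retries: the first init failure fails the pod
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]
  containers:
  - name: app-never-starts
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
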
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:93
... skipping 1211 lines ...
Sep 20 03:04:04.520: INFO: Trying to get logs from node e2e-b7cf44e8f4-abe28-minion-group-qlm3 pod exec-volume-test-gcepd-klxx container exec-container-gcepd-klxx: <nil>
STEP: delete the pod
Sep 20 03:04:04.670: INFO: Waiting for pod exec-volume-test-gcepd-klxx to disappear
Sep 20 03:04:04.714: INFO: Pod exec-volume-test-gcepd-klxx no longer exists
STEP: Deleting pod exec-volume-test-gcepd-klxx
Sep 20 03:04:04.714: INFO: Deleting pod "exec-volume-test-gcepd-klxx" in namespace "volume-5089"
Sep 20 03:04:06.310: INFO: error deleting PD "e2e-b7cf44e8f4-abe28-c31bd1a9-c241-41ff-b8ab-ce6a8b6879cc": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-c31bd1a9-c241-41ff-b8ab-ce6a8b6879cc' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-qlm3', resourceInUseByAnotherResource
Sep 20 03:04:06.310: INFO: Couldn't delete PD "e2e-b7cf44e8f4-abe28-c31bd1a9-c241-41ff-b8ab-ce6a8b6879cc", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-c31bd1a9-c241-41ff-b8ab-ce6a8b6879cc' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-qlm3', resourceInUseByAnotherResource
Sep 20 03:04:12.285: INFO: error deleting PD "e2e-b7cf44e8f4-abe28-c31bd1a9-c241-41ff-b8ab-ce6a8b6879cc": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-c31bd1a9-c241-41ff-b8ab-ce6a8b6879cc' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-qlm3', resourceInUseByAnotherResource
Sep 20 03:04:12.285: INFO: Couldn't delete PD "e2e-b7cf44e8f4-abe28-c31bd1a9-c241-41ff-b8ab-ce6a8b6879cc", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-c31bd1a9-c241-41ff-b8ab-ce6a8b6879cc' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-qlm3', resourceInUseByAnotherResource
Sep 20 03:04:19.796: INFO: Successfully deleted PD "e2e-b7cf44e8f4-abe28-c31bd1a9-c241-41ff-b8ab-ce6a8b6879cc".
Sep 20 03:04:19.796: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:152
Sep 20 03:04:19.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5089" for this suite.
... skipping 1027 lines ...
Sep 20 03:04:10.719: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:10.965: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:11.255: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:11.298: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:11.340: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:11.431: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:11.515: INFO: Lookups using dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local jessie_udp@dns-test-service-2.dns-6055.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6055.svc.cluster.local]

Sep 20 03:04:16.555: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:16.596: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:16.640: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:16.678: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:16.794: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:16.829: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:16.866: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:16.902: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:16.990: INFO: Lookups using dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local jessie_udp@dns-test-service-2.dns-6055.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6055.svc.cluster.local]

Sep 20 03:04:21.551: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:21.587: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:21.625: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:21.661: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:21.775: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:21.834: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:21.880: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:21.956: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:22.035: INFO: Lookups using dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local jessie_udp@dns-test-service-2.dns-6055.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6055.svc.cluster.local]

Sep 20 03:04:26.563: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:26.601: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:26.648: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:26.687: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:26.802: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:26.845: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:26.884: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:26.926: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:27.000: INFO: Lookups using dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local jessie_udp@dns-test-service-2.dns-6055.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6055.svc.cluster.local]

Sep 20 03:04:31.557: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:31.595: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:31.632: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:31.670: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:31.799: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:31.837: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:31.878: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:31.918: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:32.015: INFO: Lookups using dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local jessie_udp@dns-test-service-2.dns-6055.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6055.svc.cluster.local]

Sep 20 03:04:36.652: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:36.795: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:36.957: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:37.098: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: the server could not find the requested resource (get pods dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9)
Sep 20 03:04:37.385: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9: Get https://104.198.98.163/api/v1/namespaces/dns-6055/pods/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9/proxy/results/wheezy_tcp@PodARecord: stream error: stream ID 1021; INTERNAL_ERROR
Sep 20 03:04:37.863: INFO: Lookups using dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6055.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6055.svc.cluster.local wheezy_tcp@PodARecord]

Sep 20 03:04:42.072: INFO: DNS probes using dns-6055/dns-test-76364411-a7c5-429d-bce8-4d1ebb0fc2d9 succeeded
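The probes above poll the same set of lookups on a fixed interval until every name resolves. A minimal sketch of that retry-until-resolvable pattern in Go; the service name is copied from the log, while the timeout and interval are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// pollDNS retries a lookup until it succeeds or the context expires,
// mirroring the probe-until-success loop in the log above.
func pollDNS(ctx context.Context, host string, interval time.Duration) ([]net.IP, error) {
	var lastErr error
	r := &net.Resolver{}
	for {
		ips, err := r.LookupIP(ctx, "ip4", host)
		if err == nil {
			return ips, nil
		}
		lastErr = err
		select {
		case <-ctx.Done():
			return nil, fmt.Errorf("lookup %s never succeeded: %w", host, lastErr)
		case <-time.After(interval):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	ips, err := pollDNS(ctx, "dns-test-service-2.dns-6055.svc.cluster.local", 5*time.Second)
	fmt.Println(ips, err)
}
```

Carrying the last error into the final failure message, as above, preserves the "Lookups ... failed for" diagnostics the test prints on each round.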

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 1198 lines ...
Sep 20 03:04:48.904: INFO: Trying to get logs from node e2e-b7cf44e8f4-abe28-minion-group-qhwb pod exec-volume-test-gcepd-5jcx container exec-container-gcepd-5jcx: <nil>
STEP: delete the pod
Sep 20 03:04:48.998: INFO: Waiting for pod exec-volume-test-gcepd-5jcx to disappear
Sep 20 03:04:49.051: INFO: Pod exec-volume-test-gcepd-5jcx no longer exists
STEP: Deleting pod exec-volume-test-gcepd-5jcx
Sep 20 03:04:49.051: INFO: Deleting pod "exec-volume-test-gcepd-5jcx" in namespace "volume-8822"
Sep 20 03:04:50.497: INFO: error deleting PD "e2e-b7cf44e8f4-abe28-2bef944e-d2e0-4be7-bf1f-dd58972b1057": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-2bef944e-d2e0-4be7-bf1f-dd58972b1057' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-qhwb', resourceInUseByAnotherResource
Sep 20 03:04:50.497: INFO: Couldn't delete PD "e2e-b7cf44e8f4-abe28-2bef944e-d2e0-4be7-bf1f-dd58972b1057", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-2bef944e-d2e0-4be7-bf1f-dd58972b1057' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-qhwb', resourceInUseByAnotherResource
Sep 20 03:04:56.353: INFO: error deleting PD "e2e-b7cf44e8f4-abe28-2bef944e-d2e0-4be7-bf1f-dd58972b1057": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-2bef944e-d2e0-4be7-bf1f-dd58972b1057' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-qhwb', resourceInUseByAnotherResource
Sep 20 03:04:56.353: INFO: Couldn't delete PD "e2e-b7cf44e8f4-abe28-2bef944e-d2e0-4be7-bf1f-dd58972b1057", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-2bef944e-d2e0-4be7-bf1f-dd58972b1057' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-qhwb', resourceInUseByAnotherResource
Sep 20 03:05:03.791: INFO: Successfully deleted PD "e2e-b7cf44e8f4-abe28-2bef944e-d2e0-4be7-bf1f-dd58972b1057".
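The "Couldn't delete PD ... sleeping 5s" lines show a fixed-interval retry: the disk is still attached to the instance, so GCE keeps returning resourceInUseByAnotherResource until the detach completes. A minimal sketch of that loop, with a hypothetical deletePD closure standing in for the cloud API call (it fails twice before succeeding, like the run above):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errInUse stands in for GCE's resourceInUseByAnotherResource error.
var errInUse = errors.New("resourceInUseByAnotherResource")

// makeDeletePD returns a hypothetical stand-in for the cloud call;
// here it fails twice before the disk detaches.
func makeDeletePD() func() error {
	attempts := 0
	return func() error {
		attempts++
		if attempts <= 2 {
			return errInUse
		}
		return nil
	}
}

// retryDelete sleeps a fixed interval between attempts, matching the
// "sleeping 5s" behaviour in the log (interval shortened for the demo).
func retryDelete(del func() error, interval time.Duration, maxAttempts int) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = del(); err == nil {
			return nil
		}
		fmt.Printf("couldn't delete PD, sleeping %s: %v\n", interval, err)
		time.Sleep(interval)
	}
	return err
}

func main() {
	if err := retryDelete(makeDeletePD(), 100*time.Millisecond, 5); err != nil {
		fmt.Println("giving up:", err)
		return
	}
	fmt.Println("successfully deleted PD")
}
```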
Sep 20 03:05:03.791: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:152
Sep 20 03:05:03.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8822" for this suite.
... skipping 139 lines ...
Sep 20 03:04:59.311: INFO: Trying to get logs from node e2e-b7cf44e8f4-abe28-minion-group-flc5 pod exec-volume-test-gcepd-kfdh container exec-container-gcepd-kfdh: <nil>
STEP: delete the pod
Sep 20 03:04:59.406: INFO: Waiting for pod exec-volume-test-gcepd-kfdh to disappear
Sep 20 03:04:59.443: INFO: Pod exec-volume-test-gcepd-kfdh no longer exists
STEP: Deleting pod exec-volume-test-gcepd-kfdh
Sep 20 03:04:59.443: INFO: Deleting pod "exec-volume-test-gcepd-kfdh" in namespace "volume-2720"
Sep 20 03:05:00.498: INFO: error deleting PD "e2e-b7cf44e8f4-abe28-91ff924b-330b-46e6-b966-b132ede45029": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-91ff924b-330b-46e6-b966-b132ede45029' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-flc5', resourceInUseByAnotherResource
Sep 20 03:05:00.498: INFO: Couldn't delete PD "e2e-b7cf44e8f4-abe28-91ff924b-330b-46e6-b966-b132ede45029", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-91ff924b-330b-46e6-b966-b132ede45029' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-flc5', resourceInUseByAnotherResource
Sep 20 03:05:07.806: INFO: Successfully deleted PD "e2e-b7cf44e8f4-abe28-91ff924b-330b-46e6-b966-b132ede45029".
Sep 20 03:05:07.806: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:152
Sep 20 03:05:07.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2720" for this suite.
... skipping 796 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:152
... skipping 59 lines ...
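The sysctl tests above create a pod whose security context requests kernel.shm_rmid_forced and then verify the value from inside the container. A sketch of such a pod spec built from the Kubernetes API types; the object name is illustrative, and printing YAML is just a convenient way to show the result:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				// kernel.shm_rmid_forced is a namespaced sysctl the kubelet
				// accepts as safe; the container below reads it back.
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "check",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sysctl", "kernel.shm_rmid_forced"},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
```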
Sep 20 03:01:55.815: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2421
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  test/e2e/apps/cronjob.go:55
[It] should delete successful/failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:233
STEP: Creating an AllowConcurrent cronjob with custom successful-jobs-history-limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods do not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
STEP: Removing cronjob
STEP: Creating an AllowConcurrent cronjob with custom failed-jobs-history-limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods do not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
STEP: Removing cronjob
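Both runs above set a history limit of one, so as each new job finishes the CronJob controller prunes the older finished job of that kind. A sketch of a CronJob carrying those limits, assuming the batch/v1 API; the name, schedule, and container are illustrative:

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	one := int32(1)
	cj := batchv1.CronJob{
		TypeMeta:   metav1.TypeMeta{APIVersion: "batch/v1", Kind: "CronJob"},
		ObjectMeta: metav1.ObjectMeta{Name: "history-limit-demo"},
		Spec: batchv1.CronJobSpec{
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1.AllowConcurrent,
			// Keep at most one finished job of each kind around; this is
			// the property the test verifies by listing jobs explicitly.
			SuccessfulJobsHistoryLimit: &one,
			FailedJobsHistoryLimit:     &one,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "hello",
								Image:   "busybox:1.29",
								Command: []string{"echo", "hello"},
							}},
						},
					},
				},
			},
		},
	}
	out, _ := yaml.Marshal(cj)
	fmt.Print(string(out))
}
```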
[AfterEach] [sig-apps] CronJob
... skipping 4 lines ...
Sep 20 03:05:25.101: INFO: namespace cronjob-2421 deletion completed in 9.63840881s


• [SLOW TEST:209.286 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should delete successful/failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:233
------------------------------
S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:151
... skipping 898 lines ...
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-142 to expose endpoints map[hairpin:[8080]]
Sep 20 03:05:01.365: INFO: successfully validated that service hairpin-test in namespace services-142 exposes endpoints map[hairpin:[8080]] (3.301785374s elapsed)
STEP: Checking if the pod can reach itself
Sep 20 03:05:02.365: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-142 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080'
Sep 20 03:05:05.213: INFO: rc: 1
Sep 20 03:05:05.213: INFO: Service reachability failing with error: error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-142 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080] []  <nil>  + nc -zv -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1
 [] <nil> 0xc000dfa8d0 exit status 1 <nil> <nil> true [0xc00317caa0 0xc00317cab8 0xc00317cad0] [0xc00317caa0 0xc00317cab8 0xc00317cad0] [0xc00317cab0 0xc00317cac8] [0x10efcb0 0x10efcb0] 0xc001ab9080 <nil>}:
Command stdout:

stderr:
+ nc -zv -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 20 03:05:06.213: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-142 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080'
Sep 20 03:05:07.230: INFO: stderr: "+ nc -zv -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Sep 20 03:05:07.230: INFO: stdout: ""
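The hairpin check shells into the pod and runs nc -zv -t -w 2 against the pod's own service, retrying until the connection succeeds. The same reachability test expressed directly in Go; the host and port are taken from the log, the retry count is illustrative:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// reachable performs the check `nc -zv -t -w 2 host port` makes:
// open a TCP connection with a 2s timeout and close it immediately.
func reachable(host string, port int, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", fmt.Sprintf("%s:%d", host, port), timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	for i := 0; i < 5; i++ {
		if reachable("hairpin-test", 8080, 2*time.Second) {
			fmt.Println("connection succeeded")
			return
		}
		fmt.Println("connection failed, retrying...")
		time.Sleep(time.Second)
	}
}
```

The first failure above is expected while kube-proxy programs the hairpin rule; the retry is what makes the test robust.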
Sep 20 03:05:07.231: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-142 hairpin -- /bin/sh -x -c nc -zv -t -w 2 10.0.142.69 8080'
... skipping 691 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 03:05:11.356: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-8784
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when it exceeds active deadline
  test/e2e/apps/job.go:130
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:152
Sep 20 03:05:13.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep 20 03:06:03.387: INFO: namespace job-8784 deletion completed in 49.545149276s


• [SLOW TEST:52.031 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should fail when it exceeds active deadline
  test/e2e/apps/job.go:130
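The test creates a Job whose activeDeadlineSeconds is short enough that the job controller kills its pods and marks the Job failed with reason DeadlineExceeded. A sketch of such a Job object; the name, image, and deadline are illustrative:

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	deadline := int64(1)
	job := batchv1.Job{
		TypeMeta:   metav1.TypeMeta{APIVersion: "batch/v1", Kind: "Job"},
		ObjectMeta: metav1.ObjectMeta{Name: "past-deadline"},
		Spec: batchv1.JobSpec{
			// After one second the job controller terminates the pods
			// and records a Failed condition with reason DeadlineExceeded.
			ActiveDeadlineSeconds: &deadline,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "busybox",
						Image:   "busybox:1.29",
						Command: []string{"sleep", "3600"},
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(job)
	fmt.Print(string(out))
}
```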
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:93
... skipping 922 lines ...
Sep 20 03:05:46.626: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec gcepd-client --namespace=volume-1627 -- grep  /opt/0  /proc/mounts'
Sep 20 03:05:48.704: INFO: stderr: ""
Sep 20 03:05:48.704: INFO: stdout: "/dev/sdb /opt/0 ext3 rw,relatime 0 0\n"
STEP: cleaning the environment after gcepd
Sep 20 03:05:48.704: INFO: Deleting pod "gcepd-client" in namespace "volume-1627"
Sep 20 03:05:48.763: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
Sep 20 03:06:02.017: INFO: error deleting PD "e2e-b7cf44e8f4-abe28-e5ca4115-0f38-4d7b-a45c-32cb6849966e": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-e5ca4115-0f38-4d7b-a45c-32cb6849966e' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-flc5', resourceInUseByAnotherResource
Sep 20 03:06:02.017: INFO: Couldn't delete PD "e2e-b7cf44e8f4-abe28-e5ca4115-0f38-4d7b-a45c-32cb6849966e", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-e5ca4115-0f38-4d7b-a45c-32cb6849966e' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-flc5', resourceInUseByAnotherResource
Sep 20 03:06:09.323: INFO: Successfully deleted PD "e2e-b7cf44e8f4-abe28-e5ca4115-0f38-4d7b-a45c-32cb6849966e".
Sep 20 03:06:09.323: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:152
Sep 20 03:06:09.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1627" for this suite.
... skipping 310 lines ...
STEP: cleaning the environment after gcepd
Sep 20 03:05:49.568: INFO: Deleting pod "gcepd-client" in namespace "volume-3985"
Sep 20 03:05:49.609: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Sep 20 03:06:01.691: INFO: Deleting PersistentVolumeClaim "pvc-lqnvr"
Sep 20 03:06:01.747: INFO: Deleting PersistentVolume "gcepd-hkrqf"
Sep 20 03:06:03.043: INFO: error deleting PD "e2e-b7cf44e8f4-abe28-757e5530-1830-411e-80b8-97656779d490": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-757e5530-1830-411e-80b8-97656779d490' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-flc5', resourceInUseByAnotherResource
Sep 20 03:06:03.043: INFO: Couldn't delete PD "e2e-b7cf44e8f4-abe28-757e5530-1830-411e-80b8-97656779d490", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/disks/e2e-b7cf44e8f4-abe28-757e5530-1830-411e-80b8-97656779d490' is already being used by 'projects/k8s-jkns-e2e-gce-ubuntu/zones/us-west1-b/instances/e2e-b7cf44e8f4-abe28-minion-group-flc5', resourceInUseByAnotherResource
Sep 20 03:06:11.156: INFO: Successfully deleted PD "e2e-b7cf44e8f4-abe28-757e5530-1830-411e-80b8-97656779d490".
Sep 20 03:06:11.156: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/framework/framework.go:152
Sep 20 03:06:11.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3985" for this suite.
... skipping 969 lines ...
Sep 20 03:05:38.899: INFO: PersistentVolumeClaim csi-hostpathnrhjp found but phase is Pending instead of Bound.
Sep 20 03:05:40.939: INFO: PersistentVolumeClaim csi-hostpathnrhjp found but phase is Pending instead of Bound.
Sep 20 03:05:42.977: INFO: PersistentVolumeClaim csi-hostpathnrhjp found but phase is Pending instead of Bound.
Sep 20 03:05:45.020: INFO: PersistentVolumeClaim csi-hostpathnrhjp found and phase=Bound (16.375375281s)
STEP: Expanding non-expandable pvc
Sep 20 03:05:45.096: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Sep 20 03:05:45.179: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:05:47.286: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:05:49.276: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:05:51.443: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:05:53.267: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:05:55.257: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:05:57.261: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:05:59.597: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:06:01.277: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:06:03.258: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:06:05.258: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:06:07.314: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:06:09.322: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:06:11.464: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:06:13.317: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:06:15.594: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 03:06:15.879: INFO: Error updating pvc csi-hostpathnrhjp with persistentvolumeclaims "csi-hostpathnrhjp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Sep 20 03:06:15.879: INFO: Deleting PersistentVolumeClaim "csi-hostpathnrhjp"
Sep 20 03:06:16.025: INFO: Waiting up to 5m0s for PersistentVolume pvc-f8a43c03-583d-4f3a-a923-d38b3debf77a to get deleted
Sep 20 03:06:16.111: INFO: PersistentVolume pvc-f8a43c03-583d-4f3a-a923-d38b3debf77a found and phase=Bound (86.032757ms)
Sep 20 03:06:21.149: INFO: PersistentVolume pvc-f8a43c03-583d-4f3a-a923-d38b3debf77a was removed
STEP: Deleting sc
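Every update attempt above is rejected because this claim was pre-provisioned: the API server only permits resizing PVCs that were dynamically provisioned by a StorageClass that supports expansion. A client-go sketch of the update the test keeps retrying; the kubeconfig path, namespace, and claim name are assumptions, not values from this run:

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pvcs := cs.CoreV1().PersistentVolumeClaims("default")

	pvc, err := pvcs.Get(context.TODO(), "my-claim", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Bump the requested size from 5Gi to 6Gi, as the test does.
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse("6Gi")
	if _, err := pvcs.Update(context.TODO(), pvc, metav1.UpdateOptions{}); err != nil {
		// For a pre-provisioned claim the API server rejects this with the
		// "forbidden ... must support resize" error seen in the log.
		fmt.Println("update rejected:", err)
	}
}
```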
... skipping 287 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5572
STEP: Creating statefulset with conflicting port in namespace statefulset-5572
STEP: Waiting until pod test-pod starts running in namespace statefulset-5572
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-5572
Sep 20 03:05:52.781: INFO: Observed stateful pod in namespace: statefulset-5572, name: ss-0, uid: dbc9df04-7662-4222-a722-1d54e6cbb1d6, status phase: Pending. Waiting for statefulset controller to delete.
Sep 20 03:05:57.873: INFO: Observed stateful pod in namespace: statefulset-5572, name: ss-0, uid: dbc9df04-7662-4222-a722-1d54e6cbb1d6, status phase: Failed. Waiting for statefulset controller to delete.
Sep 20 03:05:58.022: INFO: Observed stateful pod in namespace: statefulset-5572, name: ss-0, uid: dbc9df04-7662-4222-a722-1d54e6cbb1d6, status phase: Failed. Waiting for statefulset controller to delete.
Sep 20 03:05:58.149: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5572
STEP: Removing pod with conflicting port in namespace statefulset-5572
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5572 and is in the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:89
Sep 20 03:06:21.398: INFO: Deleting all statefulset in ns statefulset-5572
... skipping 1376 lines ...
STEP: Creating the service on top of the pods in kubernetes
Sep 20 03:06:07.178: INFO: Service node-port-service in namespace nettest-9670 found.
Sep 20 03:06:07.437: INFO: Service session-affinity-service in namespace nettest-9670 found.
STEP: dialing(udp) 35.185.231.42 (node) --> 10.0.163.158:90 (config.clusterIP)
Sep 20 03:06:07.589: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.0.163.158 90 | grep -v '^\s*$'] Namespace:nettest-9670 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 03:06:07.589: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 03:06:09.451: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.0.163.158 90 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 20 03:06:09.451: INFO: Waiting for [netserver-0 netserver-1 netserver-2] endpoints (expected=[netserver-0 netserver-1 netserver-2], actual=[])
Sep 20 03:06:11.708: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.0.163.158 90 | grep -v '^\s*$'] Namespace:nettest-9670 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 03:06:11.708: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 03:06:14.066: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.0.163.158 90 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 20 03:06:14.066: INFO: Waiting for [netserver-0 netserver-1 netserver-2] endpoints (expected=[netserver-0 netserver-1 netserver-2], actual=[])
Sep 20 03:06:16.124: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.0.163.158 90 | grep -v '^\s*$'] Namespace:nettest-9670 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 03:06:16.125: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 03:06:18.154: INFO: Waiting for [netserver-0 netserver-2] endpoints (expected=[netserver-0 netserver-1 netserver-2], actual=[netserver-1])
Sep 20 03:06:20.200: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.0.163.158 90 | grep -v '^\s*$'] Namespace:nettest-9670 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 03:06:20.200: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 53 lines ...
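Each probe above pipes the literal string hostName through nc -w 1 -u to the service's cluster IP and treats an empty or missing reply as failure. An equivalent UDP probe in Go; the address is the probe target from the log and will only answer inside that cluster:

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

// udpProbe mirrors `echo hostName | nc -w 1 -u addr`: send the literal
// string "hostName" and wait up to a second for a non-empty reply.
func udpProbe(addr string) (string, error) {
	conn, err := net.Dial("udp", addr)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	if _, err := conn.Write([]byte("hostName")); err != nil {
		return "", err
	}
	conn.SetReadDeadline(time.Now().Add(time.Second))
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err // a read timeout is the "exit code 1" case in the log
	}
	reply := strings.TrimSpace(string(buf[:n]))
	if reply == "" {
		return "", fmt.Errorf("empty reply")
	}
	return reply, nil
}

func main() {
	fmt.Println(udpProbe("10.0.163.158:90"))
}
```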
Sep 20 03:05:52.041: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-7714
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:698
STEP: creating the pod
Sep 20 03:05:52.581: INFO: PodSpec: initContainers in spec.initContainers
Sep 20 03:06:44.003: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-95c2b41b-f3bc-4fe8-8aa0-50857b9cc2db", GenerateName:"", Namespace:"init-container-7714", SelfLink:"/api/v1/namespaces/init-container-7714/pods/pod-init-95c2b41b-f3bc-4fe8-8aa0-50857b9cc2db", UID:"a0dbcced-17bf-4c82-ac3f-156412c8c5b7", ResourceVersion:"9440", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704545552, loc:(*time.Location)(0x846e1e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"581042183"}, Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ng6tt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001891240), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ng6tt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ng6tt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ng6tt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002147fa8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"e2e-b7cf44e8f4-abe28-minion-group-flc5", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001a384e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0014d2020)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0014d2040)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0014d2048), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0014d204c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545552, loc:(*time.Location)(0x846e1e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545552, loc:(*time.Location)(0x846e1e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545552, loc:(*time.Location)(0x846e1e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704545552, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.40.0.3", PodIP:"10.64.1.87", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.1.87"}}, StartTime:(*v1.Time)(0xc0022e3f60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0007489a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000748a10)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://cfb9d61d99f631030ceeb3cf218fe984bd780ac5049aaa23066a55f97504df9c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022e3fa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022e3f80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0014d20ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:152
Sep 20 03:06:44.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7714" for this suite.
Sep 20 03:07:12.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 03:07:13.548: INFO: namespace init-container-7714 deletion completed in 29.504040079s


• [SLOW TEST:81.508 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:693
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:698
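The dumped pod above shows why the test passes: init1 runs /bin/false, so with RestartPolicy Always the kubelet keeps restarting it with backoff (RestartCount:3 in the status) and neither init2 nor run1 ever starts. The same pod reduced to its essentials, with an illustrative object name:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			// With RestartPolicy Always the failing init container is
			// restarted forever; init2 and run1 never start.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
```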
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 03:07:04.888: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 849 lines ...
Sep 20 03:06:56.480: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Sep 20 03:06:58.649: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass volume-expand-6332-gcepd-scntpdr
STEP: creating a claim
STEP: Expanding non-expandable pvc
Sep 20 03:06:58.785: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Sep 20 03:06:58.871: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:00.948: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:02.950: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:04.950: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:06.949: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:08.947: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:10.950: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:12.951: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:15.014: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:16.948: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:18.967: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:20.952: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:22.968: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:24.949: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:26.950: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:29.082: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 20 03:07:29.346: INFO: Error updating pvc gcepdb84ps with PersistentVolumeClaim "gcepdb84ps" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
STEP: Deleting pvc
Sep 20 03:07:29.346: INFO: Deleting PersistentVolumeClaim "gcepdb84ps"
STEP: Deleting sc
Sep 20 03:07:29.497: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
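Here the rejection message differs: a PVC spec is immutable after creation except for resources.requests on bound claims, and even then the StorageClass must allow expansion, which the freshly created class above does not. For contrast, a sketch of a StorageClass that opts into expansion; the name is illustrative, and the test's generated class deliberately leaves the field unset:

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	expand := true
	sc := storagev1.StorageClass{
		TypeMeta:    metav1.TypeMeta{APIVersion: "storage.k8s.io/v1", Kind: "StorageClass"},
		ObjectMeta:  metav1.ObjectMeta{Name: "gce-pd-expandable"},
		Provisioner: "kubernetes.io/gce-pd",
		// With this unset (the test's case), PVC resize requests are rejected.
		AllowVolumeExpansion: &expand,
	}
	out, _ := yaml.Marshal(sc)
	fmt.Print(string(out))
}
```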
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/framework/framework.go:152
... skipping 734 lines ...
STEP: Deleting the previously created pod
Sep 20 03:06:58.118: INFO: Deleting pod "pvc-volume-tester-x8k29" in namespace "csi-mock-volumes-3700"
Sep 20 03:06:58.162: INFO: Wait up to 5m0s for pod "pvc-volume-tester-x8k29" to be fully deleted
STEP: Checking CSI driver logs
Sep 20 03:07:06.300: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3700","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-1319a70e-6580-496a-926e-1873e535293b","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-1319a70e-6580-496a-926e-1873e535293b"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3700","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3700","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3700","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-3700","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-1319a70e-6580-496a-926e-1873e535293b","storage.kubernetes.io/csiProvisionerIdentity":"1568948791274-8081-csi-mock-csi-mock-volumes-3700"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1319a70e-6580-496a-926e-1873e535293b/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-1319a70e-6580-496a-926e-1873e535293b","storage.kubernetes.io/csiProvisionerIdentity":"1568948791274-8081-csi-mock-csi-mock-volumes-3700"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1319a70e-6580-496a-926e-1873e535293b/globalmount","target_path":"/var/lib/kubelet/pods/80313b7c-ec84-42db-80ee-4e83d47c30c5/volumes/kubernetes.io~csi/pvc-1319a70e-6580-496a-926e-1873e535293b/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-1319a70e-6580-496a-926e-1873e535293b","storage.kubernetes.io/csiProvisionerIdentity":"1568948791274-8081-csi-mock-csi-mock-volumes-3700"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/80313b7c-ec84-42db-80ee-4e83d47c30c5/volumes/kubernetes.io~csi/pvc-1319a70e-6580-496a-926e-1873e535293b/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1319a70e-6580-496a-926e-1873e535293b/globalmount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-3700"},"Response":{},"Error":""}

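The driver log is one JSON object per line behind a "gRPCCall: " prefix, and the test only needs the Method fields to confirm the NodeStageVolume, NodePublishVolume, NodeUnpublishVolume, NodeUnstageVolume ordering. A small Go parser for that format; the embedded sample is trimmed to the node-side calls:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// grpcCall captures just the fields we need from each log line.
type grpcCall struct {
	Method string `json:"Method"`
	Error  string `json:"Error"`
}

func main() {
	log := `gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Error":""}`

	sc := bufio.NewScanner(strings.NewReader(log))
	for sc.Scan() {
		payload, ok := strings.CutPrefix(sc.Text(), "gRPCCall: ")
		if !ok {
			continue // not a gRPC call line
		}
		var c grpcCall
		if err := json.Unmarshal([]byte(payload), &c); err != nil {
			continue
		}
		fmt.Println(c.Method)
	}
}
```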
Sep 20 03:07:06.300: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-x8k29
Sep 20 03:07:06.300: INFO: Deleting pod "pvc-volume-tester-x8k29" in namespace "csi-mock-volumes-3700"
STEP: Deleting claim pvc-599qf
Sep 20 03:07:06.418: INFO: Waiting up to 2m0s for PersistentVolume pvc-1319a70e-6580-496a-926e-1873e535293b to get deleted
... skipping 488 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 03:07:38.127: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-3314
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name secret-emptykey-test-7cd525bf-3d94-46e3-a096-488ae8455c02
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:152
Sep 20 03:07:38.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3314" for this suite.
Sep 20 03:07:44.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 03:07:46.142: INFO: namespace secrets-3314 deletion completed in 7.586203007s


• [SLOW TEST:8.015 seconds]
[sig-api-machinery] Secrets
test/e2e/common/secrets.go:32
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
SSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:151
... skipping 2107 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Sep 20 03:08:08.955: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-2337 httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Sep 20 03:08:10.076: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Sep 20 03:08:10.076: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-2337 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Sep 20 03:08:11.483: INFO: rc: 255
Sep 20 03:08:11.483: INFO: got err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-2337 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1] []  <nil> I0920 03:08:11.164680     191 merged_client_builder.go:164] Using in-cluster namespace
I0920 03:08:11.164957     191 merged_client_builder.go:122] Using in-cluster configuration
I0920 03:08:11.175516     191 merged_client_builder.go:122] Using in-cluster configuration
I0920 03:08:11.188955     191 merged_client_builder.go:122] Using in-cluster configuration
I0920 03:08:11.189589     191 round_trippers.go:420] GET https://10.0.0.1:443/api/v1/namespaces/kubectl-2337/pods?limit=500
I0920 03:08:11.189606     191 round_trippers.go:427] Request Headers:
I0920 03:08:11.189615     191 round_trippers.go:431]     Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json
... skipping 6 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0920 03:08:11.290588     191 helpers.go:114] error: You must be logged in to the server (Unauthorized)
 + /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255
 [] <nil> 0xc000e26780 exit status 255 <nil> <nil> true [0xc00019c320 0xc0000ec148 0xc0000eda98] [0xc00019c320 0xc0000ec148 0xc0000eda98] [0xc00019de28 0xc0000ed3f0] [0x10efcb0 0x10efcb0] 0xc002c06540 <nil>}:
Command stdout:
I0920 03:08:11.164680     191 merged_client_builder.go:164] Using in-cluster namespace
I0920 03:08:11.164957     191 merged_client_builder.go:122] Using in-cluster configuration
... skipping 11 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0920 03:08:11.290588     191 helpers.go:114] error: You must be logged in to the server (Unauthorized)

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
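The assertion here is only about the outcome: kubectl run with --token=invalid must exit non-zero with an Unauthorized message. A sketch of the same check driven from Go, assuming kubectl is on PATH:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// A bogus bearer token must produce a 401 and a non-zero exit code.
	cmd := exec.Command("kubectl", "get", "pods", "--token=invalid", "--v=7")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("kubectl exited %d, as expected:\n%s", ee.ExitCode(), out)
		return
	}
	fmt.Println("unexpected success or error:", err)
}
```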
STEP: trying to use kubectl with invalid server
Sep 20 03:08:11.483: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-2337 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Sep 20 03:08:12.702: INFO: rc: 255
Sep 20 03:08:12.702: INFO: got err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-2337 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1] []  <nil> I0920 03:08:12.568245     202 merged_client_builder.go:164] Using in-cluster namespace
I0920 03:08:12.598625     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 29 milliseconds
I0920 03:08:12.598735     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.613544     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 14 milliseconds
I0920 03:08:12.613665     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.613686     202 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.622997     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 9 milliseconds
I0920 03:08:12.623077     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.626078     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 2 milliseconds
I0920 03:08:12.626234     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.629221     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 2 milliseconds
I0920 03:08:12.629280     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.629609     202 helpers.go:217] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
F0920 03:08:12.629653     202 helpers.go:114] Unable to connect to the server: dial tcp: lookup invalid on 10.0.0.10:53: no such host
 + /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255
 [] <nil> 0xc00267d560 exit status 255 <nil> <nil> true [0xc001386cc0 0xc001386d88 0xc001386ef0] [0xc001386cc0 0xc001386d88 0xc001386ef0] [0xc001386d38 0xc001386e78] [0x10efcb0 0x10efcb0] 0xc0014888a0 <nil>}:
Command stdout:
I0920 03:08:12.568245     202 merged_client_builder.go:164] Using in-cluster namespace
I0920 03:08:12.598625     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 29 milliseconds
I0920 03:08:12.598735     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.613544     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 14 milliseconds
I0920 03:08:12.613665     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.613686     202 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.622997     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 9 milliseconds
I0920 03:08:12.623077     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.626078     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 2 milliseconds
I0920 03:08:12.626234     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.629221     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 2 milliseconds
I0920 03:08:12.629280     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 03:08:12.629609     202 helpers.go:217] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
F0920 03:08:12.629653     202 helpers.go:114] Unable to connect to the server: dial tcp: lookup invalid on 10.0.0.10:53: no such host

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
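In the invalid-server case, the failure happens before any request reaches a real API server: kubectl's discovery client retries GET http://invalid/api five times, and each attempt dies in DNS ("lookup invalid on 10.0.0.10:53: no such host", 10.0.0.10 being the cluster DNS service). A minimal stdlib sketch of the same underlying error, assuming only that the literal hostname "invalid" does not resolve on the network where it runs:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// The dial fails during name resolution, before any HTTP exchange,
	// which is exactly the error repeated in the discovery retries above.
	_, err := http.Get("http://invalid/api?timeout=32s")
	fmt.Println("GET error:", err)

	// The root cause in the error chain is an ordinary *net.DNSError.
	var dnsErr *net.DNSError
	if errors.As(err, &dnsErr) {
		fmt.Println("unresolvable host:", dnsErr.Name) // "invalid"
	}
}
```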
STEP: trying to use kubectl with invalid namespace
Sep 20 03:08:12.702: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.98.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-2337 httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Sep 20 03:08:13.624: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Sep 20 03:08:13.624: INFO: stdout: "I0920 03:08:13.454092     213 merged_client_builder.go:122] Using in-cluster configuration\nI0920 03:08:13.459825     213 merged_client_builder.go:122] Using in-cluster configuration\nI0920 03:08:13.467160     213 merged_client_builder.go:122] Using in-cluster configuration\nI0920 03:08:13.490036     213 round_trippers.go:443] GET https://10.0.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 22 milliseconds\nNo resources found in invalid namespace.\n"
Sep 20 03:08:13.624: INFO: stdout: I0920 03:08:13.454092     213 merged_client_builder.go:122] Using in-cluster configuration
... skipping 4077 lines ...
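The invalid-namespace step above is the interesting contrast: unlike the bad token and the bad server, a namespace that does not exist is not an API error. The GET returns 200 OK with an empty pod list, so kubectl prints "No resources found in invalid namespace." and exits 0. A minimal client-go sketch of the same call (a hypothetical illustration, not this test's code; note the context argument to List postdates the 1.16-era client in this log, and rest.InClusterConfig only works inside a pod, matching the "Using in-cluster configuration" lines):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Same "in-cluster configuration" path the kubectl log lines show.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Listing pods in a nonexistent namespace is a 200 with zero items;
	// only auth or connectivity problems land in err.
	pods, err := client.CoreV1().Pods("invalid").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pods in namespace %q: %d\n", "invalid", len(pods.Items)) // 0
}
```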
test/e2e/kubectl/framework.go:23
  Update Demo
  test/e2e/kubectl/kubectl.go:273
    should scale a replication controller  [Conformance]
    test/e2e/framework/framework.go:698
------------------------------
S{"component":"entrypoint","file":"prow/entrypoint/run.go:163","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2019-09-20T03:09:07Z"}
Traceback (most recent call last):
  File "../test-infra/scenarios/kubernetes_e2e.py", line 778, in <module>
    main(parse_args())
  File "../test-infra/scenarios/kubernetes_e2e.py", line 626, in main
    mode.start(runner_args)
  File "../test-infra/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 13 lines ...
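The run ends not with a test failure but with an interrupt: prow's entrypoint wrapper receives a termination signal (the reason is not shown in this excerpt), and the Python scenario above dies with a traceback because its child process was killed mid-run. A rough sketch of that wrapper pattern, hypothetical and not prow's actual entrypoint code: run a child, forward the signal, report the child's exit.

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	// Stand-in for the long-running e2e runner the wrapper supervises.
	cmd := exec.Command("sleep", "3600")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Forward SIGTERM/SIGINT to the child and log the interrupt, the
	// behavior behind "Entrypoint received interrupt: terminated".
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		s := <-sigs
		log.Printf("entrypoint received interrupt: %v", s)
		if err := cmd.Process.Signal(s); err != nil {
			log.Printf("forwarding signal: %v", err)
		}
	}()

	if err := cmd.Wait(); err != nil {
		log.Printf("child exited: %v", err)
	}
}
```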