PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-09-20 03:57
Elapsed: 29m8s
Revision: 0f4dfb764319a8e686da906c7f42cfa861195b85
Refs: 82703

No Test Failures!


Error lines from build-log.txt

... skipping 142 lines ...
INFO: 5212 processes: 5133 remote cache hit, 29 processwrapper-sandbox, 50 remote.
INFO: Build completed successfully, 5305 total actions
INFO: Build completed successfully, 5305 total actions
make: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
2019/09/20 04:04:01 process.go:155: Step 'make -C /home/prow/go/src/k8s.io/kubernetes bazel-release' finished in 5m58.383661262s
2019/09/20 04:04:01 util.go:255: Flushing memory.
2019/09/20 04:04:02 util.go:265: flushMem error (page cache): exit status 1
2019/09/20 04:04:02 process.go:153: Running: /home/prow/go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce --allow-dup
push-build.sh: BEGIN main on 86330d84-db5a-11e9-b604-5eb296997ac8 Fri Sep 20 04:04:02 UTC 2019

$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
INFO: Invocation ID: 6cf9ab4d-002a-4f06-90a4-e3cb083ca846
Loading: 
... skipping 848 lines ...
Trying to find master named 'e2e-4c09d0cdbb-abe28-master'
Looking for address 'e2e-4c09d0cdbb-abe28-master-ip'
Using master: e2e-4c09d0cdbb-abe28-master (external IP: 34.83.200.78; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

...........Kubernetes cluster created.
Cluster "k8s-jkns-gce-reboot-1-6_e2e-4c09d0cdbb-abe28" set.
User "k8s-jkns-gce-reboot-1-6_e2e-4c09d0cdbb-abe28" set.
Context "k8s-jkns-gce-reboot-1-6_e2e-4c09d0cdbb-abe28" created.
Switched to context "k8s-jkns-gce-reboot-1-6_e2e-4c09d0cdbb-abe28".
... skipping 3102 lines ...
STEP: creating execpod-noendpoints on node e2e-4c09d0cdbb-abe28-minion-group-1kz0
Sep 20 04:15:48.954: INFO: Creating new exec pod
Sep 20 04:15:55.519: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node e2e-4c09d0cdbb-abe28-minion-group-1kz0
Sep 20 04:15:55.519: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-5294 execpod-noendpoints94g7t -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Sep 20 04:15:58.434: INFO: rc: 1
Sep 20 04:15:58.434: INFO: error contained 'REFUSED', as expected: error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-5294 execpod-noendpoints94g7t -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80] []  <nil>  + /agnhost connect --timeout=3s no-pods:80
REFUSED
command terminated with exit code 1
 [] <nil> 0xc0029cc720 exit status 1 <nil> <nil> true [0xc0022e4518 0xc0022e4530 0xc0022e4548] [0xc0022e4518 0xc0022e4530 0xc0022e4548] [0xc0022e4528 0xc0022e4540] [0x10efcb0 0x10efcb0] 0xc0029ce0c0 <nil>}:
Command stdout:

stderr:
+ /agnhost connect --timeout=3s no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:152
Sep 20 04:15:58.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5294" for this suite.
Sep 20 04:16:04.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
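
(Editor's note: the REFUSED exchange above is the expected outcome of connecting to a Service with no endpoints. As a rough illustration only, and not agnhost's actual source, a connect probe of that shape can be sketched in Go; the `no-pods:80` address comes from the log, everything else is hypothetical.)

```go
// Rough sketch, not agnhost's real implementation: dial the target with a
// timeout and classify the outcome. A Service with no endpoints rejects the
// connection, which is the expected "REFUSED" (exit code 1) seen above.
// The classification here is simplified for illustration.
package connectprobe

import (
	"net"
	"time"
)

func classifyConnect(addr string, timeout time.Duration) string {
	conn, err := net.DialTimeout("tcp", addr, timeout) // e.g. "no-pods:80", 3*time.Second
	if err == nil {
		conn.Close()
		return "OK"
	}
	if nerr, ok := err.(net.Error); ok && nerr.Timeout() {
		return "TIMEOUT"
	}
	return "REFUSED" // e.g. connection refused from a no-endpoints Service
}
```
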
... skipping 463 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:15:39.125: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-8959
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:152
Sep 20 04:16:01.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep 20 04:16:11.341: INFO: namespace job-8959 deletion completed in 9.68065634s


• [SLOW TEST:32.216 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
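
(Editor's note: the "Creating a job / Ensuring job reaches completions" steps above boil down to polling the Job's status. A minimal client-go sketch, assuming a pre-built clientset and using the context-free call signatures of this era's client-go; this is not the e2e framework's actual helper.)

```go
// Minimal sketch of waiting for a Job to reach its completion count.
// cs, ns, and name are assumed to come from the caller.
package jobutil

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForJobCompletions(cs kubernetes.Interface, ns, name string, completions int32, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		job, err := cs.BatchV1().Jobs(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Succeeded counts pods that exited 0; with RestartPolicyOnFailure a
		// locally restarted container does not create a new pod, so the test
		// still converges on the requested completions.
		return job.Status.Succeeded >= completions, nil
	})
}
```
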
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:93
... skipping 984 lines ...
Sep 20 04:16:19.490: INFO: Pod exec-volume-test-gcepd-preprovisionedpv-llb7 no longer exists
STEP: Deleting pod exec-volume-test-gcepd-preprovisionedpv-llb7
Sep 20 04:16:19.490: INFO: Deleting pod "exec-volume-test-gcepd-preprovisionedpv-llb7" in namespace "volume-9705"
STEP: Deleting pv and pvc
Sep 20 04:16:19.527: INFO: Deleting PersistentVolumeClaim "pvc-vcldr"
Sep 20 04:16:19.563: INFO: Deleting PersistentVolume "gcepd-srwvg"
Sep 20 04:16:20.543: INFO: error deleting PD "e2e-4c09d0cdbb-abe28-271c70b0-bb67-4b23-93a5-f02b2148e81b": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-271c70b0-bb67-4b23-93a5-f02b2148e81b' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-1kz0', resourceInUseByAnotherResource
Sep 20 04:16:20.543: INFO: Couldn't delete PD "e2e-4c09d0cdbb-abe28-271c70b0-bb67-4b23-93a5-f02b2148e81b", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-271c70b0-bb67-4b23-93a5-f02b2148e81b' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-1kz0', resourceInUseByAnotherResource
Sep 20 04:16:28.091: INFO: Successfully deleted PD "e2e-4c09d0cdbb-abe28-271c70b0-bb67-4b23-93a5-f02b2148e81b".
Sep 20 04:16:28.091: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:152
Sep 20 04:16:28.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9705" for this suite.
... skipping 5007 lines ...
STEP: cleaning the environment after gcepd
Sep 20 04:17:26.203: INFO: Deleting pod "gcepd-client" in namespace "volume-7495"
Sep 20 04:17:26.251: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Sep 20 04:17:38.351: INFO: Deleting PersistentVolumeClaim "pvc-vjfxt"
Sep 20 04:17:38.418: INFO: Deleting PersistentVolume "gcepd-d4l5v"
Sep 20 04:17:39.688: INFO: error deleting PD "e2e-4c09d0cdbb-abe28-e4fcf186-4d47-47f6-a1cb-ae51e49077cb": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-e4fcf186-4d47-47f6-a1cb-ae51e49077cb' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-1kz0', resourceInUseByAnotherResource
Sep 20 04:17:39.688: INFO: Couldn't delete PD "e2e-4c09d0cdbb-abe28-e4fcf186-4d47-47f6-a1cb-ae51e49077cb", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-e4fcf186-4d47-47f6-a1cb-ae51e49077cb' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-1kz0', resourceInUseByAnotherResource
Sep 20 04:17:45.712: INFO: error deleting PD "e2e-4c09d0cdbb-abe28-e4fcf186-4d47-47f6-a1cb-ae51e49077cb": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-e4fcf186-4d47-47f6-a1cb-ae51e49077cb' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-1kz0', resourceInUseByAnotherResource
Sep 20 04:17:45.712: INFO: Couldn't delete PD "e2e-4c09d0cdbb-abe28-e4fcf186-4d47-47f6-a1cb-ae51e49077cb", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-e4fcf186-4d47-47f6-a1cb-ae51e49077cb' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-1kz0', resourceInUseByAnotherResource
Sep 20 04:17:53.157: INFO: Successfully deleted PD "e2e-4c09d0cdbb-abe28-e4fcf186-4d47-47f6-a1cb-ae51e49077cb".
Sep 20 04:17:53.157: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/framework/framework.go:152
Sep 20 04:17:53.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7495" for this suite.
... skipping 2101 lines ...
Sep 20 04:17:21.059: INFO: PersistentVolumeClaim csi-hostpathwmcfk found but phase is Pending instead of Bound.
Sep 20 04:17:23.202: INFO: PersistentVolumeClaim csi-hostpathwmcfk found but phase is Pending instead of Bound.
Sep 20 04:17:25.256: INFO: PersistentVolumeClaim csi-hostpathwmcfk found but phase is Pending instead of Bound.
Sep 20 04:17:27.296: INFO: PersistentVolumeClaim csi-hostpathwmcfk found and phase=Bound (33.035090487s)
STEP: Expanding non-expandable pvc
Sep 20 04:17:27.372: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Sep 20 04:17:27.452: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:29.898: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:31.543: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:33.561: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:35.532: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:37.541: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:39.547: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:41.558: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:43.568: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:45.532: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:47.534: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:49.540: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:51.536: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:53.530: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:55.538: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:57.548: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:17:57.630: INFO: Error updating pvc csi-hostpathwmcfk with persistentvolumeclaims "csi-hostpathwmcfk" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Sep 20 04:17:57.630: INFO: Deleting PersistentVolumeClaim "csi-hostpathwmcfk"
Sep 20 04:17:57.678: INFO: Waiting up to 5m0s for PersistentVolume pvc-fef64b29-ce87-4339-99e4-c4189ca1d1e9 to get deleted
Sep 20 04:17:57.717: INFO: PersistentVolume pvc-fef64b29-ce87-4339-99e4-c4189ca1d1e9 found and phase=Released (38.85026ms)
Sep 20 04:18:02.767: INFO: PersistentVolume pvc-fef64b29-ce87-4339-99e4-c4189ca1d1e9 was removed
STEP: Deleting sc
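
(Editor's note: the repeated `forbidden: only dynamically provisioned pvc can be resized...` lines are the apiserver rejecting a resize of a PVC whose StorageClass does not allow volume expansion. Expressed in client-go terms below, illustrative only; signatures are the context-free ones of this era's client-go.)

```go
// Illustrative only: bump a PVC's storage request the way the "Expanding
// non-expandable pvc" step does. Unless the StorageClass sets
// allowVolumeExpansion: true, the apiserver rejects the Update with the
// Forbidden error quoted above.
package pvcutil

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func expandPVC(cs kubernetes.Interface, ns, name, newSize string) (*corev1.PersistentVolumeClaim, error) {
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse(newSize) // e.g. "6Gi"
	return cs.CoreV1().PersistentVolumeClaims(ns).Update(pvc)
}
```
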
... skipping 268 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:17:32.530: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-7801
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  test/e2e/apps/job.go:110
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:152
Sep 20 04:18:30.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep 20 04:18:40.651: INFO: namespace job-7801 deletion completed in 9.627990864s


• [SLOW TEST:68.121 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  test/e2e/apps/job.go:110
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:18:20.377: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 1616 lines ...
Sep 20 04:18:54.893: INFO: Got stdout from 104.199.119.116:22: Hello from prow@e2e-4c09d0cdbb-abe28-minion-group-f1c7
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Sep 20 04:18:55.848: INFO: Got stdout from 35.233.169.40:22: stdout
Sep 20 04:18:55.848: INFO: Got stderr from 35.233.169.40:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing prow@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [k8s.io] [sig-node] SSH
  test/e2e/framework/framework.go:152
Sep 20 04:19:00.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-3421" for this suite.
Sep 20 04:19:09.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 04:19:12.924: INFO: namespace ssh-3421 deletion completed in 12.022665892s
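
(Editor's note: the SSH steps above run shell snippets on nodes and capture stdout, stderr, and the exit code separately; the final `missing port in address` error also shows why the test always appends `:22`. A minimal sketch of such a helper with golang.org/x/crypto/ssh follows; the ClientConfig and host are assumed, and this is not the e2e framework's implementation.)

```go
// Run a command over SSH, returning stdout and stderr separately, mirroring
// the "Got stdout / Got stderr" lines above.
package sshutil

import (
	"bytes"
	"net"

	"golang.org/x/crypto/ssh"
)

func runSSHCommand(cfg *ssh.ClientConfig, host, cmd string) (string, string, error) {
	// ssh.Dial requires host:port, hence the explicit ":22"; omitting it
	// yields exactly the "missing port in address" failure in the log.
	client, err := ssh.Dial("tcp", net.JoinHostPort(host, "22"), cfg)
	if err != nil {
		return "", "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", "", err
	}
	defer session.Close()
	var stdout, stderr bytes.Buffer
	session.Stdout, session.Stderr = &stdout, &stderr
	err = session.Run(cmd) // a non-zero exit comes back as *ssh.ExitError
	return stdout.String(), stderr.String(), err
}
```
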
... skipping 405 lines ...
Sep 20 04:19:07.619: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:152
Sep 20 04:19:07.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8349" for this suite.
Sep 20 04:19:17.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 04:19:18.057: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"crd-publish-openapi-test-common-group.example.com", Version:"v6"}
Sep 20 04:19:18.057: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: crd-publish-openapi-test-common-group.example.com/v6: the server could not find the requested resource, retrying in 2s.
Sep 20 04:19:22.471: INFO: namespace kubectl-8349 deletion completed in 14.806311662s


• [SLOW TEST:33.216 seconds]
[sig-cli] Kubectl client
test/e2e/kubectl/framework.go:23
... skipping 230 lines ...
Sep 20 04:19:08.027: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-4902-crds.spec'
Sep 20 04:19:08.476: INFO: stderr: ""
Sep 20 04:19:08.476: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4902-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Sep 20 04:19:08.476: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-4902-crds.spec.bars'
Sep 20 04:19:08.958: INFO: stderr: ""
Sep 20 04:19:08.958: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4902-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Sep 20 04:19:08.958: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-4902-crds.spec.bars2'
Sep 20 04:19:09.442: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:152
Sep 20 04:19:15.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9274" for this suite.
... skipping 470 lines ...
Sep 20 04:18:53.477: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:18:53.541: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:18:53.804: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:18:53.852: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:18:53.904: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:18:54.029: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:18:54.147: INFO: Lookups using dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2050.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2050.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local jessie_udp@dns-test-service-2.dns-2050.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2050.svc.cluster.local]

Sep 20 04:18:59.326: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:18:59.497: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:18:59.726: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:00.002: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:00.285: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:00.368: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:00.496: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:00.554: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:00.684: INFO: Lookups using dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2050.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2050.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local jessie_udp@dns-test-service-2.dns-2050.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2050.svc.cluster.local]

Sep 20 04:19:04.189: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:04.231: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:04.290: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:04.359: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:04.574: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:04.625: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:04.671: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:04.717: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:04.805: INFO: Lookups using dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2050.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2050.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local jessie_udp@dns-test-service-2.dns-2050.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2050.svc.cluster.local]

Sep 20 04:19:09.216: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:09.362: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:09.511: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:09.677: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:10.088: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:10.150: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:10.220: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:10.326: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:10.450: INFO: Lookups using dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2050.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2050.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local jessie_udp@dns-test-service-2.dns-2050.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2050.svc.cluster.local]

Sep 20 04:19:14.301: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:14.461: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:14.621: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:14.787: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:15.522: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2050.svc.cluster.local from pod dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76: the server could not find the requested resource (get pods dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76)
Sep 20 04:19:15.637: INFO: Lookups using dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2050.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2050.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2050.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2050.svc.cluster.local]

Sep 20 04:19:21.106: INFO: DNS probes using dns-2050/dns-test-1dec2a2c-1f3c-496e-affb-af5f872f1f76 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 402 lines ...
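
(Editor's note: the wheezy/jessie lookup rounds above follow a simple converge-or-retry shape: probe every record, log the failures, sleep, and repeat until a round comes back clean, at which point the test logs "DNS probes ... succeeded". A stripped-down sketch follows, with placeholder names and none of the pod plumbing the real test uses.)

```go
// Retry DNS lookups until all names resolve, the same shape as the probe
// loop above: headless-service records take a few seconds to propagate, so
// early rounds report failures before the final success.
package dnsprobe

import (
	"fmt"
	"net"
	"time"
)

func probeUntilResolvable(names []string, rounds int) error {
	for i := 0; i < rounds; i++ {
		var failed []string
		for _, n := range names {
			if _, err := net.LookupHost(n); err != nil {
				failed = append(failed, n)
			}
		}
		if len(failed) == 0 {
			return nil // all records resolved this round
		}
		fmt.Printf("Lookups failed for: %v\n", failed)
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("DNS probes did not converge after %d rounds", rounds)
}
```
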
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:19:28.851: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7667
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name secret-emptykey-test-df5416b1-e691-488a-97ba-85bd0e637f2e
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:152
Sep 20 04:19:29.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7667" for this suite.
Sep 20 04:19:35.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 04:19:37.247: INFO: namespace secrets-7667 deletion completed in 7.781348634s


• [SLOW TEST:8.397 seconds]
[sig-api-machinery] Secrets
test/e2e/common/secrets.go:32
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:93
... skipping 175 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should not launch unsafe, but not explicitly enabled sysctls on the node
  test/e2e/common/sysctl.go:188
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:152
Sep 20 04:19:32.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-264" for this suite.
Sep 20 04:19:38.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1512 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4273
STEP: Creating statefulset with conflicting port in namespace statefulset-4273
STEP: Waiting until pod test-pod will start running in namespace statefulset-4273
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4273
Sep 20 04:19:41.368: INFO: Observed stateful pod in namespace: statefulset-4273, name: ss-0, uid: 30614680-b48e-41b4-9b93-444ec9bb58d7, status phase: Pending. Waiting for statefulset controller to delete.
Sep 20 04:19:42.545: INFO: Observed stateful pod in namespace: statefulset-4273, name: ss-0, uid: 30614680-b48e-41b4-9b93-444ec9bb58d7, status phase: Failed. Waiting for statefulset controller to delete.
Sep 20 04:19:42.569: INFO: Observed stateful pod in namespace: statefulset-4273, name: ss-0, uid: 30614680-b48e-41b4-9b93-444ec9bb58d7, status phase: Failed. Waiting for statefulset controller to delete.
Sep 20 04:19:42.605: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4273
STEP: Removing pod with conflicting port in namespace statefulset-4273
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4273 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:89
Sep 20 04:19:50.925: INFO: Deleting all statefulset in ns statefulset-4273
... skipping 36 lines ...
Sep 20 04:19:00.115: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-2s5gn] to have phase Bound
Sep 20 04:19:00.184: INFO: PersistentVolumeClaim pvc-2s5gn found but phase is Pending instead of Bound.
Sep 20 04:19:02.352: INFO: PersistentVolumeClaim pvc-2s5gn found and phase=Bound (2.236089647s)
Sep 20 04:19:02.352: INFO: Waiting up to 3m0s for PersistentVolume gce-xw4mw to have phase Bound
Sep 20 04:19:02.485: INFO: PersistentVolume gce-xw4mw found and phase=Bound (133.38755ms)
STEP: Creating the Client Pod
[It] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:124
STEP: Deleting the Claim
Sep 20 04:19:27.125: INFO: Deleting PersistentVolumeClaim "pvc-2s5gn"
STEP: Deleting the Pod
Sep 20 04:19:27.383: INFO: Deleting pod "pvc-tester-cls6h" in namespace "pv-2396"
Sep 20 04:19:27.433: INFO: Wait up to 5m0s for pod "pvc-tester-cls6h" to be fully deleted
... skipping 16 lines ...
Sep 20 04:20:15.157: INFO: Successfully deleted PD "e2e-4c09d0cdbb-abe28-c2ff00b1-806f-45f7-a02a-81585fc505a7".


• [SLOW TEST:79.000 seconds]
[sig-storage] PersistentVolumes GCEPD
test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:124
------------------------------
[BeforeEach] [sig-auth] PodSecurityPolicy
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:19:56.312: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 2002 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:20:13.354: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-648
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  test/e2e/apps/job.go:226
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:152
Sep 20 04:20:30.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-648" for this suite.
Sep 20 04:20:42.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 04:20:44.299: INFO: namespace job-648 deletion completed in 14.082635906s


• [SLOW TEST:30.945 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  test/e2e/apps/job.go:226
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:93
Sep 20 04:20:44.301: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 1701 lines ...
Sep 20 04:20:44.815: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-4045
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:698
STEP: creating the pod
Sep 20 04:20:46.357: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:152
Sep 20 04:20:50.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep 20 04:21:01.306: INFO: namespace init-container-4045 deletion completed in 10.818565999s


• [SLOW TEST:16.491 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:693
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:93
Sep 20 04:21:01.308: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 340 lines ...
STEP: Building a namespace api object, basename node-problem-detector
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-problem-detector-7412
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
  test/e2e/node/node_problem_detector.go:49
Sep 20 04:17:21.062: INFO: Waiting up to 1m0s for all nodes to be ready
[It] should run without error
  test/e2e/node/node_problem_detector.go:57
STEP: Getting all nodes and their SSH-able IP addresses
STEP: Check node "35.233.169.40:22" has node-problem-detector process
STEP: Check node-problem-detector is running fine on node "35.233.169.40:22"
STEP: Inject log to trigger AUFSUmountHung on node "35.233.169.40:22"
STEP: Check node "34.83.122.145:22" has node-problem-detector process
... skipping 25 lines ...
Sep 20 04:21:11.423: INFO: namespace node-problem-detector-7412 deletion completed in 7.576313766s


• [SLOW TEST:230.784 seconds]
[k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
test/e2e/framework/framework.go:693
  should run without error
  test/e2e/node/node_problem_detector.go:57
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:21:04.325: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 2327 lines ...
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-7844
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:698
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 20 04:21:39.352: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 664 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Sep 20 04:21:34.804: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6485 httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Sep 20 04:21:36.379: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Sep 20 04:21:36.379: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6485 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Sep 20 04:21:37.675: INFO: rc: 255
Sep 20 04:21:37.675: INFO: got err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6485 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1] []  <nil> I0920 04:21:37.496765     191 merged_client_builder.go:164] Using in-cluster namespace
I0920 04:21:37.497007     191 merged_client_builder.go:122] Using in-cluster configuration
I0920 04:21:37.504868     191 merged_client_builder.go:122] Using in-cluster configuration
I0920 04:21:37.528405     191 merged_client_builder.go:122] Using in-cluster configuration
I0920 04:21:37.528873     191 round_trippers.go:420] GET https://10.0.0.1:443/api/v1/namespaces/kubectl-6485/pods?limit=500
I0920 04:21:37.528887     191 round_trippers.go:427] Request Headers:
I0920 04:21:37.528895     191 round_trippers.go:431]     Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json
... skipping 6 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0920 04:21:37.564816     191 helpers.go:114] error: You must be logged in to the server (Unauthorized)
 + /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255
 [] <nil> 0xc0018d8f30 exit status 255 <nil> <nil> true [0xc0028fc980 0xc0028fc998 0xc0028fc9b0] [0xc0028fc980 0xc0028fc998 0xc0028fc9b0] [0xc0028fc990 0xc0028fc9a8] [0x10efcb0 0x10efcb0] 0xc0021b9920 <nil>}:
Command stdout:
I0920 04:21:37.496765     191 merged_client_builder.go:164] Using in-cluster namespace
I0920 04:21:37.497007     191 merged_client_builder.go:122] Using in-cluster configuration
... skipping 11 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0920 04:21:37.564816     191 helpers.go:114] error: You must be logged in to the server (Unauthorized)

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Sep 20 04:21:37.675: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6485 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Sep 20 04:21:38.722: INFO: rc: 255
Sep 20 04:21:38.722: INFO: got err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6485 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1] []  <nil> I0920 04:21:38.507790     202 merged_client_builder.go:164] Using in-cluster namespace
I0920 04:21:38.538539     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 30 milliseconds
I0920 04:21:38.539102     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.569970     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 29 milliseconds
I0920 04:21:38.570059     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.570108     202 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.585252     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 14 milliseconds
I0920 04:21:38.585473     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.613829     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 28 milliseconds
I0920 04:21:38.613948     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.623430     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 9 milliseconds
I0920 04:21:38.623508     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.623545     202 helpers.go:217] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
F0920 04:21:38.623564     202 helpers.go:114] Unable to connect to the server: dial tcp: lookup invalid on 10.0.0.10:53: no such host
 + /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255
 [] <nil> 0xc0018d9680 exit status 255 <nil> <nil> true [0xc0028fc9b8 0xc0028fc9d0 0xc0028fc9e8] [0xc0028fc9b8 0xc0028fc9d0 0xc0028fc9e8] [0xc0028fc9c8 0xc0028fc9e0] [0x10efcb0 0x10efcb0] 0xc0021b9c20 <nil>}:
Command stdout:
I0920 04:21:38.507790     202 merged_client_builder.go:164] Using in-cluster namespace
I0920 04:21:38.538539     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 30 milliseconds
I0920 04:21:38.539102     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.569970     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 29 milliseconds
I0920 04:21:38.570059     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.570108     202 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.585252     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 14 milliseconds
I0920 04:21:38.585473     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.613829     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 28 milliseconds
I0920 04:21:38.613948     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.623430     202 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 9 milliseconds
I0920 04:21:38.623508     202 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0920 04:21:38.623545     202 helpers.go:217] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
F0920 04:21:38.623564     202 helpers.go:114] Unable to connect to the server: dial tcp: lookup invalid on 10.0.0.10:53: no such host

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Sep 20 04:21:38.723: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-6485 httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Sep 20 04:21:40.033: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Sep 20 04:21:40.034: INFO: stdout: "I0920 04:21:39.887849     214 merged_client_builder.go:122] Using in-cluster configuration\nI0920 04:21:39.892084     214 merged_client_builder.go:122] Using in-cluster configuration\nI0920 04:21:39.898819     214 merged_client_builder.go:122] Using in-cluster configuration\nI0920 04:21:39.913470     214 round_trippers.go:443] GET https://10.0.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 14 milliseconds\nNo resources found in invalid namespace.\n"
Sep 20 04:21:40.034: INFO: stdout: I0920 04:21:39.887849     214 merged_client_builder.go:122] Using in-cluster configuration
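
(Editor's note: the invalid-token case above ends in HTTP 401, which kubectl reports as "You must be logged in to the server (Unauthorized)" and exit code 255. The same check expressed directly against client-go, illustrative only; the server URL and token are placeholders.)

```go
// Build a client with a bogus bearer token and confirm the apiserver
// answers 401 Unauthorized, as in the kubectl output above.
package authcheck

import (
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func expectUnauthorized(server string) (bool, error) {
	cfg := &rest.Config{
		Host:            server,    // e.g. "https://10.0.0.1:443" (placeholder)
		BearerToken:     "invalid", // token the apiserver will not accept
		TLSClientConfig: rest.TLSClientConfig{Insecure: true},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	_, err = cs.CoreV1().Pods("default").List(metav1.ListOptions{})
	return errors.IsUnauthorized(err), nil // true when the server returned 401
}
```
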
... skipping 559 lines ...
Sep 20 04:21:17.827: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec gcepd-client --namespace=volume-4830 -- grep  /opt/0  /proc/mounts'
Sep 20 04:21:20.407: INFO: stderr: ""
Sep 20 04:21:20.407: INFO: stdout: "/dev/sdc /opt/0 ext3 rw,relatime 0 0\n"
STEP: cleaning the environment after gcepd
Sep 20 04:21:20.407: INFO: Deleting pod "gcepd-client" in namespace "volume-4830"
Sep 20 04:21:20.453: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
Sep 20 04:21:33.398: INFO: error deleting PD "e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-96ws', resourceInUseByAnotherResource
Sep 20 04:21:33.398: INFO: Couldn't delete PD "e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-96ws', resourceInUseByAnotherResource
Sep 20 04:21:39.139: INFO: error deleting PD "e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-96ws', resourceInUseByAnotherResource
Sep 20 04:21:39.139: INFO: Couldn't delete PD "e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-96ws', resourceInUseByAnotherResource
Sep 20 04:21:45.399: INFO: error deleting PD "e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-96ws', resourceInUseByAnotherResource
Sep 20 04:21:45.399: INFO: Couldn't delete PD "e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-96ws', resourceInUseByAnotherResource
Sep 20 04:21:52.783: INFO: Successfully deleted PD "e2e-4c09d0cdbb-abe28-b781a015-2d3b-410a-a2e3-44d6c89d9684".
Sep 20 04:21:52.783: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:152
Sep 20 04:21:52.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4830" for this suite.
... skipping 336 lines ...
Sep 20 04:21:46.610: INFO: Pod exec-volume-test-gcepd-preprovisionedpv-bv4h no longer exists
STEP: Deleting pod exec-volume-test-gcepd-preprovisionedpv-bv4h
Sep 20 04:21:46.611: INFO: Deleting pod "exec-volume-test-gcepd-preprovisionedpv-bv4h" in namespace "volume-2109"
STEP: Deleting pv and pvc
Sep 20 04:21:46.648: INFO: Deleting PersistentVolumeClaim "pvc-jn94h"
Sep 20 04:21:46.695: INFO: Deleting PersistentVolume "gcepd-lcbl7"
Sep 20 04:21:48.190: INFO: error deleting PD "e2e-4c09d0cdbb-abe28-a2e33e7e-f6ad-4cbf-8156-5a77d0c250ba": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-a2e33e7e-f6ad-4cbf-8156-5a77d0c250ba' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-f1c7', resourceInUseByAnotherResource
Sep 20 04:21:48.190: INFO: Couldn't delete PD "e2e-4c09d0cdbb-abe28-a2e33e7e-f6ad-4cbf-8156-5a77d0c250ba", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-a2e33e7e-f6ad-4cbf-8156-5a77d0c250ba' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-f1c7', resourceInUseByAnotherResource
Sep 20 04:21:55.629: INFO: Successfully deleted PD "e2e-4c09d0cdbb-abe28-a2e33e7e-f6ad-4cbf-8156-5a77d0c250ba".
Sep 20 04:21:55.629: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/framework/framework.go:152
Sep 20 04:21:55.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2109" for this suite.
... skipping 181 lines ...
Sep 20 04:21:23.564: INFO: Waiting for PV gce-ph89r to bind to PVC pvc-mhjtp
Sep 20 04:21:23.564: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-mhjtp] to have phase Bound
Sep 20 04:21:23.624: INFO: PersistentVolumeClaim pvc-mhjtp found and phase=Bound (59.955672ms)
Sep 20 04:21:23.624: INFO: Waiting up to 3m0s for PersistentVolume gce-ph89r to have phase Bound
Sep 20 04:21:23.660: INFO: PersistentVolume gce-ph89r found and phase=Bound (36.195415ms)
STEP: Creating the Client Pod
[It] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:139
STEP: Deleting the Persistent Volume
Sep 20 04:21:35.902: INFO: Deleting PersistentVolume "gce-ph89r"
STEP: Deleting the client pod
Sep 20 04:21:36.157: INFO: Deleting pod "pvc-tester-bsx7n" in namespace "pv-3794"
Sep 20 04:21:36.195: INFO: Wait up to 5m0s for pod "pvc-tester-bsx7n" to be fully deleted
... skipping 15 lines ...
Sep 20 04:22:04.687: INFO: Successfully deleted PD "e2e-4c09d0cdbb-abe28-2d6f36a4-549f-4a92-91d9-0f51508dd2a7".


• [SLOW TEST:44.315 seconds]
[sig-storage] PersistentVolumes GCEPD
test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:139
------------------------------
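
The PV-before-pod test above first polls both the claim and the volume until each reports phase Bound, then deletes the PV while the client pod still holds it. A sketch of the wait-for-Bound step with recent client-go (namespace and claim name are placeholders):

package e2eutil

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls the claim every 2s until it reports phase Bound,
// printing the intermediate phase the way the framework's log lines do.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pvc.Status.Phase == v1.ClaimBound {
			return nil
		}
		fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("PVC %s/%s never reached phase Bound within %v", ns, name, timeout)
}
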
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/storage/testsuites/base.go:93
Sep 20 04:22:04.690: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
... skipping 84 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:152
... skipping 2351 lines ...
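
The sysctl test above sets kernel.shm_rmid_forced through the pod-level security context, waits for the pod to start and complete, and then checks the logs to confirm the kernel actually applied the value. A minimal pod spec in that shape (the image and names here are illustrative, not the test's exact ones):

package e2eutil

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sysctlPod builds a pod that sets a sysctl via the security context and
// prints it back, so the test can verify the value from the pod logs.
func sysctlPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-pod"},
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{
				// The sysctl under test; the kubelet must allow it
				// (it sits in the safe/whitelisted set, or is
				// explicitly enabled via --allowed-unsafe-sysctls).
				Sysctls: []v1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/sysctl", "kernel.shm_rmid_forced"},
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	}
}
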
Sep 20 04:21:49.500: INFO: PersistentVolumeClaim csi-hostpathgm422 found but phase is Pending instead of Bound.
Sep 20 04:21:51.546: INFO: PersistentVolumeClaim csi-hostpathgm422 found but phase is Pending instead of Bound.
Sep 20 04:21:53.587: INFO: PersistentVolumeClaim csi-hostpathgm422 found but phase is Pending instead of Bound.
Sep 20 04:21:55.628: INFO: PersistentVolumeClaim csi-hostpathgm422 found and phase=Bound (6.166899512s)
STEP: Expanding non-expandable pvc
Sep 20 04:21:55.708: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Sep 20 04:21:55.782: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:21:57.857: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:21:59.856: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:01.858: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:03.859: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:05.860: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:07.856: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:09.867: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:11.853: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:13.855: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:15.867: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:17.862: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:19.887: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:21.853: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:23.866: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:25.968: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 20 04:22:26.058: INFO: Error updating pvc csi-hostpathgm422 with persistentvolumeclaims "csi-hostpathgm422" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Sep 20 04:22:26.058: INFO: Deleting PersistentVolumeClaim "csi-hostpathgm422"
Sep 20 04:22:26.096: INFO: Waiting up to 5m0s for PersistentVolume pvc-267a2e10-5818-489e-81c1-ad4ae9233745 to get deleted
Sep 20 04:22:26.134: INFO: PersistentVolume pvc-267a2e10-5818-489e-81c1-ad4ae9233745 found and phase=Released (38.274967ms)
Sep 20 04:22:31.174: INFO: PersistentVolume pvc-267a2e10-5818-489e-81c1-ad4ae9233745 was removed
STEP: Deleting sc
... skipping 604 lines ...
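
The expansion attempts above keep re-fetching the claim, bumping spec.resources.requests storage from 5Gi to 6Gi, and retrying while the API server answers "forbidden" for the non-expandable storage class. A sketch of that update-and-retry loop with recent client-go:

package e2eutil

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// tryExpandPVC re-reads the claim each attempt (to pick up a fresh
// resourceVersion), raises the storage request, and retries on rejection.
// For a class without allowVolumeExpansion every Update fails, as logged.
func tryExpandPVC(ctx context.Context, cs kubernetes.Interface, ns, name string, newSize resource.Quantity, attempts int) error {
	for i := 0; i < attempts; i++ {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		pvc.Spec.Resources.Requests[v1.ResourceStorage] = newSize
		_, err = cs.CoreV1().PersistentVolumeClaims(ns).Update(ctx, pvc, metav1.UpdateOptions{})
		if err == nil {
			return nil // update accepted: the class supports resize
		}
		fmt.Printf("Error updating pvc %s with %v\n", name, err)
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s was never resized", ns, name)
}
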
Sep 20 04:22:20.005: INFO: Unable to read jessie_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:20.060: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:20.105: INFO: Unable to read jessie_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:20.147: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:20.191: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:20.254: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:20.610: INFO: Lookups using dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2049 wheezy_tcp@dns-test-service.dns-2049 wheezy_udp@dns-test-service.dns-2049.svc wheezy_tcp@dns-test-service.dns-2049.svc wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2049 jessie_tcp@dns-test-service.dns-2049 jessie_udp@dns-test-service.dns-2049.svc jessie_tcp@dns-test-service.dns-2049.svc jessie_udp@_http._tcp.dns-test-service.dns-2049.svc jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc]

Sep 20 04:22:25.655: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:25.698: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:25.788: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:25.927: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:26.018: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
... skipping 5 lines ...
Sep 20 04:22:26.609: INFO: Unable to read jessie_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:26.668: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:26.717: INFO: Unable to read jessie_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:26.760: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:26.805: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:26.858: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:27.195: INFO: Lookups using dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2049 wheezy_tcp@dns-test-service.dns-2049 wheezy_udp@dns-test-service.dns-2049.svc wheezy_tcp@dns-test-service.dns-2049.svc wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2049 jessie_tcp@dns-test-service.dns-2049 jessie_udp@dns-test-service.dns-2049.svc jessie_tcp@dns-test-service.dns-2049.svc jessie_udp@_http._tcp.dns-test-service.dns-2049.svc jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc]

Sep 20 04:22:30.651: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:30.696: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:30.761: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:30.811: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:30.864: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
... skipping 5 lines ...
Sep 20 04:22:31.803: INFO: Unable to read jessie_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:31.850: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:31.892: INFO: Unable to read jessie_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:31.932: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:31.986: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:32.029: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:32.333: INFO: Lookups using dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2049 wheezy_tcp@dns-test-service.dns-2049 wheezy_udp@dns-test-service.dns-2049.svc wheezy_tcp@dns-test-service.dns-2049.svc wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2049 jessie_tcp@dns-test-service.dns-2049 jessie_udp@dns-test-service.dns-2049.svc jessie_tcp@dns-test-service.dns-2049.svc jessie_udp@_http._tcp.dns-test-service.dns-2049.svc jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc]

Sep 20 04:22:35.654: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:35.697: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:35.737: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:35.777: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:35.825: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
... skipping 5 lines ...
Sep 20 04:22:36.509: INFO: Unable to read jessie_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:36.549: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:36.605: INFO: Unable to read jessie_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:36.651: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:36.701: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:36.753: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:37.235: INFO: Lookups using dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2049 wheezy_tcp@dns-test-service.dns-2049 wheezy_udp@dns-test-service.dns-2049.svc wheezy_tcp@dns-test-service.dns-2049.svc wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2049 jessie_tcp@dns-test-service.dns-2049 jessie_udp@dns-test-service.dns-2049.svc jessie_tcp@dns-test-service.dns-2049.svc jessie_udp@_http._tcp.dns-test-service.dns-2049.svc jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc]

Sep 20 04:22:40.654: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:40.701: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:40.749: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:40.797: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:40.854: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
... skipping 5 lines ...
Sep 20 04:22:41.617: INFO: Unable to read jessie_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:41.665: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:41.725: INFO: Unable to read jessie_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:41.773: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:41.828: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:41.890: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714: the server could not find the requested resource (get pods dns-test-d19c3c88-90f0-4d61-b817-57b58d859714)
Sep 20 04:22:42.256: INFO: Lookups using dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2049 wheezy_tcp@dns-test-service.dns-2049 wheezy_udp@dns-test-service.dns-2049.svc wheezy_tcp@dns-test-service.dns-2049.svc wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2049 jessie_tcp@dns-test-service.dns-2049 jessie_udp@dns-test-service.dns-2049.svc jessie_tcp@dns-test-service.dns-2049.svc jessie_udp@_http._tcp.dns-test-service.dns-2049.svc jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc]

Sep 20 04:22:47.253: INFO: DNS probes using dns-2049/dns-test-d19c3c88-90f0-4d61-b817-57b58d859714 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 1102 lines ...
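
The DNS probes above re-run the whole set of UDP and TCP lookups (from both the wheezy and jessie probe images) every few seconds and only pass once no record fails. A stripped-down sketch of that converge-on-success loop; lookup is a placeholder for the per-record check the real test performs by reading the probe pod's result files:

package e2eutil

import (
	"fmt"
	"time"
)

// waitForDNS re-checks every record each round and succeeds only when a
// full round produces zero failures, mirroring the probe loop in the log.
func waitForDNS(records []string, lookup func(record string) error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		var failed []string
		for _, r := range records {
			if err := lookup(r); err != nil {
				fmt.Printf("Unable to read %s: %v\n", r, err)
				failed = append(failed, r)
			}
		}
		if len(failed) == 0 {
			return nil // all records resolved: probes succeeded
		}
		fmt.Printf("Lookups failed for: %v\n", failed)
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("DNS probes did not converge within %v", timeout)
}
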
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should support sysctls
  test/e2e/common/sysctl.go:67
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:152
... skipping 108 lines ...
Sep 20 04:23:20.837: INFO: AfterEach: Cleaning up test resources


S [SKIPPING] in Spec Setup (BeforeEach) [8.048 seconds]
[sig-storage] PersistentVolumes:vsphere
test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on vsphere volume detach [BeforeEach]
  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:167

  Only supported for providers [vsphere] (not gce)

  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
------------------------------
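
The SKIPPING entry above comes from a provider guard in the spec's BeforeEach: the vsphere persistent-volume tests bail out unless the suite is running on the vsphere provider, which this GCE job is not. A sketch of that guard, with skip standing in for ginkgo's Skip:

package e2eutil

import "fmt"

// skipUnlessProviderIs skips the current spec unless the active provider
// is in the supported list, producing a message like the one in the log.
func skipUnlessProviderIs(current string, skip func(string), supported ...string) {
	for _, p := range supported {
		if p == current {
			return // provider matches: run the spec
		}
	}
	skip(fmt.Sprintf("Only supported for providers %v (not %s)", supported, current))
}
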
... skipping 118 lines ...
Sep 20 04:23:08.205: INFO: Trying to get logs from node e2e-4c09d0cdbb-abe28-minion-group-1kz0 pod exec-volume-test-gcepd-f2vv container exec-container-gcepd-f2vv: <nil>
STEP: delete the pod
Sep 20 04:23:08.409: INFO: Waiting for pod exec-volume-test-gcepd-f2vv to disappear
Sep 20 04:23:08.456: INFO: Pod exec-volume-test-gcepd-f2vv no longer exists
STEP: Deleting pod exec-volume-test-gcepd-f2vv
Sep 20 04:23:08.456: INFO: Deleting pod "exec-volume-test-gcepd-f2vv" in namespace "volume-3575"
Sep 20 04:23:09.565: INFO: error deleting PD "e2e-4c09d0cdbb-abe28-c7abb2ba-b01a-4c36-9614-4db9b8feb5d2": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-c7abb2ba-b01a-4c36-9614-4db9b8feb5d2' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-1kz0', resourceInUseByAnotherResource
Sep 20 04:23:09.565: INFO: Couldn't delete PD "e2e-4c09d0cdbb-abe28-c7abb2ba-b01a-4c36-9614-4db9b8feb5d2", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-c7abb2ba-b01a-4c36-9614-4db9b8feb5d2' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-1kz0', resourceInUseByAnotherResource
Sep 20 04:23:17.108: INFO: Successfully deleted PD "e2e-4c09d0cdbb-abe28-c7abb2ba-b01a-4c36-9614-4db9b8feb5d2".
Sep 20 04:23:17.108: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:152
Sep 20 04:23:17.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3575" for this suite.
... skipping 670 lines ...
Sep 20 04:21:49.749: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Sep 20 04:21:49.749: INFO: Waiting for all frontend pods to be Running.
Sep 20 04:22:54.805: INFO: Waiting for frontend to serve content.
Sep 20 04:22:54.864: INFO: Trying to add a new entry to the guestbook.
Sep 20 04:22:54.935: INFO: Verifying that added entry can be retrieved.
Sep 20 04:22:55.001: INFO: Failed to get response from guestbook. err: <nil>, response: {"data": ""}
STEP: using delete to clean up resources
Sep 20 04:23:00.050: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8883'
Sep 20 04:23:00.368: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 20 04:23:00.369: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Sep 20 04:23:00.369: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8883'
... skipping 285 lines ...
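
The guestbook validation above adds an entry and then polls the frontend; an empty {"data": ""} reply, as logged, is retried rather than treated as fatal. A sketch of that poll, assuming the classic guestbook frontend's guestbook.php?cmd=get&key=messages endpoint and a placeholder frontendURL:

package e2eutil

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// waitForGuestbookEntry polls the frontend until the previously written
// entry shows up in the JSON body, retrying empty responses.
func waitForGuestbookEntry(frontendURL, entry string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(frontendURL + "/guestbook.php?cmd=get&key=messages")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if strings.Contains(string(body), entry) {
				return nil // entry is visible through the service
			}
			fmt.Printf("Failed to get response from guestbook. response: %s\n", body)
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("entry %q never appeared in guestbook", entry)
}
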
STEP: Checking that volume plugin kubernetes.io/gce-pd is not used in pod directory
Sep 20 04:23:12.253: INFO: Deleting pod "security-context-17ace1e4-22e8-421b-a163-15e54f1bca40" in namespace "volumemode-752"
Sep 20 04:23:12.297: INFO: Wait up to 5m0s for pod "security-context-17ace1e4-22e8-421b-a163-15e54f1bca40" to be fully deleted
STEP: Deleting pv and pvc
Sep 20 04:23:20.384: INFO: Deleting PersistentVolumeClaim "pvc-nhb67"
Sep 20 04:23:20.446: INFO: Deleting PersistentVolume "gcepd-z5ld8"
Sep 20 04:23:21.841: INFO: error deleting PD "e2e-4c09d0cdbb-abe28-b71adc7b-0d19-45d2-bcc7-85eed10e05c4": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-b71adc7b-0d19-45d2-bcc7-85eed10e05c4' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-1kz0', resourceInUseByAnotherResource
Sep 20 04:23:21.841: INFO: Couldn't delete PD "e2e-4c09d0cdbb-abe28-b71adc7b-0d19-45d2-bcc7-85eed10e05c4", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/disks/e2e-4c09d0cdbb-abe28-b71adc7b-0d19-45d2-bcc7-85eed10e05c4' is already being used by 'projects/k8s-jkns-gce-reboot-1-6/zones/us-west1-b/instances/e2e-4c09d0cdbb-abe28-minion-group-1kz0', resourceInUseByAnotherResource
Sep 20 04:23:29.214: INFO: Successfully deleted PD "e2e-4c09d0cdbb-abe28-b71adc7b-0d19-45d2-bcc7-85eed10e05c4".
Sep 20 04:23:29.214: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:152
Sep 20 04:23:29.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-752" for this suite.
... skipping 410 lines ...
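
The teardown above waits up to 5m0s for the client pod to be "fully deleted" before it removes the PV and PVC. A sketch of that wait with recent client-go, treating a NotFound error as success:

package e2eutil

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodDeleted polls until the pod GET returns NotFound, i.e. the
// object is gone from the API server, not merely marked for deletion.
func waitForPodDeleted(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // pod is fully deleted
		}
		if err != nil {
			return err
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s was not deleted within %v", ns, name, timeout)
}
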
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-7149 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Sep 20 04:23:17.089: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/; test "$?" -ne "0"'
Sep 20 04:23:18.198: INFO: rc: 1
Sep 20 04:23:18.198: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/; test "$?" -ne "0"] []  <nil> NOW: 2019-09-20 04:23:18.014692365 +0000 UTC m=+16.321427081 + curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1
 [] <nil> 0xc002002d50 exit status 1 <nil> <nil> true [0xc002336d90 0xc002336da8 0xc002336dc0] [0xc002336d90 0xc002336da8 0xc002336dc0] [0xc002336da0 0xc002336db8] [0x10efcb0 0x10efcb0] 0xc002482480 <nil>}:
Command stdout:
NOW: 2019-09-20 04:23:18.014692365 +0000 UTC m=+16.321427081
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Sep 20 04:23:20.199: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/; test "$?" -ne "0"'
Sep 20 04:23:21.253: INFO: rc: 1
Sep 20 04:23:21.253: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/; test "$?" -ne "0"] []  <nil> NOW: 2019-09-20 04:23:21.12760696 +0000 UTC m=+19.434341667 + curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1
 [] <nil> 0xc001fd3170 exit status 1 <nil> <nil> true [0xc001d443e0 0xc001d443f8 0xc001d44410] [0xc001d443e0 0xc001d443f8 0xc001d44410] [0xc001d443f0 0xc001d44408] [0x10efcb0 0x10efcb0] 0xc0025a61e0 <nil>}:
Command stdout:
NOW: 2019-09-20 04:23:21.12760696 +0000 UTC m=+19.434341667
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Sep 20 04:23:22.199: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/; test "$?" -ne "0"'
Sep 20 04:23:23.168: INFO: rc: 1
Sep 20 04:23:23.168: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/; test "$?" -ne "0"] []  <nil> NOW: 2019-09-20 04:23:23.041873634 +0000 UTC m=+21.348608344 + curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1
 [] <nil> 0xc001e82d20 exit status 1 <nil> <nil> true [0xc0028fc968 0xc0028fc980 0xc0028fc998] [0xc0028fc968 0xc0028fc980 0xc0028fc998] [0xc0028fc978 0xc0028fc990] [0x10efcb0 0x10efcb0] 0xc001814300 <nil>}:
Command stdout:
NOW: 2019-09-20 04:23:23.041873634 +0000 UTC m=+21.348608344
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Sep 20 04:23:24.199: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/; test "$?" -ne "0"'
Sep 20 04:23:26.087: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Sep 20 04:23:26.088: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Sep 20 04:23:26.172: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/'
Sep 20 04:23:28.121: INFO: rc: 7
Sep 20 04:23:28.121: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/] []  <nil>  + curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
command terminated with exit code 7
 [] <nil> 0xc002e0b0b0 exit status 7 <nil> <nil> true [0xc001f94480 0xc001f944a0 0xc001f944b8] [0xc001f94480 0xc001f944a0 0xc001f944b8] [0xc001f94498 0xc001f944b0] [0x10efcb0 0x10efcb0] 0xc001f18a20 <nil>}:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Sep 20 04:23:30.122: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/'
Sep 20 04:23:31.893: INFO: rc: 7
Sep 20 04:23:31.893: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/] []  <nil>  + curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
command terminated with exit code 7
 [] <nil> 0xc001fd3ad0 exit status 7 <nil> <nil> true [0xc001d44440 0xc001d44458 0xc001d44470] [0xc001d44440 0xc001d44458 0xc001d44470] [0xc001d44450 0xc001d44468] [0x10efcb0 0x10efcb0] 0xc0025a65a0 <nil>}:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Sep 20 04:23:32.122: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/'
Sep 20 04:23:34.651: INFO: rc: 7
Sep 20 04:23:34.651: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/] []  <nil>  + curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
command terminated with exit code 7
 [] <nil> 0xc001d4e1e0 exit status 7 <nil> <nil> true [0xc001d44478 0xc001d44490 0xc001d444a8] [0xc001d44478 0xc001d44490 0xc001d444a8] [0xc001d44488 0xc001d444a0] [0x10efcb0 0x10efcb0] 0xc0025a68a0 <nil>}:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Sep 20 04:23:36.122: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.200.78 --kubeconfig=/workspace/.kube/config exec --namespace=services-7149 execpod-rt2cv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/'
Sep 20 04:23:37.705: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7149.svc.cluster.local:80/\n"
Sep 20 04:23:37.705: INFO: stdout: "NOW: 2019-09-20 04:23:37.348731914 +0000 UTC m=+35.655466623"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-7149
... skipping 4936 lines ...
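
The unready-endpoint checks above shell out to kubectl exec and judge the service purely by curl's exit status: a non-zero code (7, connection refused, or the deliberately inverted test "$?" -ne "0") means no ready endpoint, while exit 0 with a NOW: body means the slow-terminating pod is still being served. A sketch of that probe with os/exec; the kubectl path, server, kubeconfig, namespace, and pod name are all placeholders:

package e2eutil

import (
	"os/exec"
)

// serviceReachable runs curl inside the exec pod and reports whether the
// service answered (curl exit code 0). A non-zero curl exit is treated as
// "unreachable" rather than a test error, mirroring the retry loop above.
func serviceReachable(kubectl, server, kubeconfig, ns, pod, url string) (bool, string, error) {
	cmd := exec.Command(kubectl,
		"--server="+server, "--kubeconfig="+kubeconfig,
		"exec", "--namespace="+ns, pod, "--",
		"/bin/sh", "-c", "curl -q -s --connect-timeout 2 "+url)
	out, err := cmd.Output()
	if err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, string(out), nil // curl ran but failed to connect
		}
		return false, "", err // kubectl itself failed to run
	}
	return true, string(out), nil
}
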
STEP: Creating the service on top of the pods in kubernetes
Sep 20 04:24:20.419: INFO: Service node-port-service in namespace nettest-227 found.
Sep 20 04:24:20.547: INFO: Service session-affinity-service in namespace nettest-227 found.
STEP: dialing(udp) test-container-pod --> 10.0.247.95:90
Sep 20 04:24:20.633: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.3.222:8080/dial?request=hostName&protocol=udp&host=10.0.247.95&port=90&tries=1'] Namespace:nettest-227 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:24:20.633: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:24:27.365: INFO: Tries: 10, in try: 0, stdout: {"errors":["reading from udp connection failed. err:'read udp 10.64.3.222:54160-\u003e10.0.247.95:90: i/o timeout'"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-227", SelfLink:"/api/v1/namespaces/nettest-227/pods/host-test-container-pod", UID:"038a1af8-15e0-4d64-87da-ea794b17c0bb", ResourceVersion:"18507", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704550242, loc:(*time.Location)(0x846e1e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2xmd8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002e63640), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2xmd8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0018ed128), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"e2e-4c09d0cdbb-abe28-minion-group-96ws", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024e4600), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0018ed160)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0018ed180)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0018ed188), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0018ed18c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550242, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550248, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550248, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550242, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.40.0.5", PodIP:"10.40.0.5", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.40.0.5"}}, StartTime:(*v1.Time)(0xc002ac1ba0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002ac1bc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/k8s-authenticated-test/agnhost:2.6", ImageID:"docker-pullable://gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", ContainerID:"docker://0318e997c44eb61193d11c1e56a5eb34a9e20a4dd80fd20c949bbfa16694462a", Started:(*bool)(0xc0018ed220)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
Sep 20 04:24:29.410: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.3.222:8080/dial?request=hostName&protocol=udp&host=10.0.247.95&port=90&tries=1'] Namespace:nettest-227 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:24:29.410: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:24:31.138: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-227", SelfLink:"/api/v1/namespaces/nettest-227/pods/host-test-container-pod", UID:"038a1af8-15e0-4d64-87da-ea794b17c0bb", ResourceVersion:"18507", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704550242, loc:(*time.Location)(0x846e1e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2xmd8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002e63640), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2xmd8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0018ed128), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"e2e-4c09d0cdbb-abe28-minion-group-96ws", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024e4600), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0018ed160)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0018ed180)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0018ed188), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0018ed18c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550242, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550248, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550248, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550242, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.40.0.5", PodIP:"10.40.0.5", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.40.0.5"}}, StartTime:(*v1.Time)(0xc002ac1ba0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002ac1bc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/k8s-authenticated-test/agnhost:2.6", ImageID:"docker-pullable://gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", ContainerID:"docker://0318e997c44eb61193d11c1e56a5eb34a9e20a4dd80fd20c949bbfa16694462a", Started:(*bool)(0xc0018ed220)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
Sep 20 04:24:33.180: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.3.222:8080/dial?request=hostName&protocol=udp&host=10.0.247.95&port=90&tries=1'] Namespace:nettest-227 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 20 04:24:33.180: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 20 04:24:34.397: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-227", SelfLink:"/api/v1/namespaces/nettest-227/pods/host-test-container-pod", UID:"038a1af8-15e0-4d64-87da-ea794b17c0bb", ResourceVersion:"18507", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704550242, loc:(*time.Location)(0x846e1e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2xmd8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002e63640), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2xmd8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0018ed128), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"e2e-4c09d0cdbb-abe28-minion-group-96ws", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024e4600), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0018ed160)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0018ed180)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0018ed188), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0018ed18c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550242, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550248, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550248, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550242, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.40.0.5", PodIP:"10.40.0.5", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.40.0.5"}}, StartTime:(*v1.Time)(0xc002ac1ba0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002ac1bc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/k8s-authenticated-test/agnhost:2.6", ImageID:"docker-pullable://gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", ContainerID:"docker://0318e997c44eb61193d11c1e56a5eb34a9e20a4dd80fd20c949bbfa16694462a", Started:(*bool)(0xc0018ed220)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
... skipping 2162 lines ...
Sep 20 04:24:13.211: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-7816
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:698
STEP: creating the pod
Sep 20 04:24:13.536: INFO: PodSpec: initContainers in spec.initContainers
Sep 20 04:25:07.042: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-efd702d9-a636-4a2b-acd0-321c6d92ad33", GenerateName:"", Namespace:"init-container-7816", SelfLink:"/api/v1/namespaces/init-container-7816/pods/pod-init-efd702d9-a636-4a2b-acd0-321c6d92ad33", UID:"8df1a01f-dec6-4f13-82f3-a7f24e7e655c", ResourceVersion:"20226", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704550253, loc:(*time.Location)(0x846e1e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"536755516"}, Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gxnbs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001e5bd80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gxnbs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gxnbs", 
ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gxnbs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001cade48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"e2e-4c09d0cdbb-abe28-minion-group-f1c7", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b4e600), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cadec0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cadee0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001cadee8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001cadeec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550253, loc:(*time.Location)(0x846e1e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550253, loc:(*time.Location)(0x846e1e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550253, loc:(*time.Location)(0x846e1e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550253, loc:(*time.Location)(0x846e1e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.40.0.3", PodIP:"10.64.2.209", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.2.209"}}, StartTime:(*v1.Time)(0xc0023f2700), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020b43f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020b4460)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://0dfee2d3da45ec34fc93ec71ab51cd6554e15ae30b68f83ffbcb8f1b1b671225", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0023f2760), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0023f2720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc001cadf9f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:152
Sep 20 04:25:07.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7816" for this suite.
Sep 20 04:25:37.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 20 04:25:38.724: INFO: namespace init-container-7816 deletion completed in 31.634819492s


• [SLOW TEST:85.513 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:693
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
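The pod dump in the test above encodes the whole scenario the conformance test exercises: init1 (/bin/false) fails, so init2 never runs, the app container run1 stays Waiting, and with restartPolicy Always the kubelet keeps retrying init1 with backoff (RestartCount:3 by the time of the second check). A minimal sketch of an equivalent pod, with the generated names and service-account token volume omitted:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"}, // hypothetical name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            InitContainers: []corev1.Container{
                // Init containers run in order; init1 always exits 1, so init2
                // and the app container below are never started.
                {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
                {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
            },
            Containers: []corev1.Container{
                {Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
            },
        },
    }
    fmt.Printf("%s: expect Phase=Pending, Initialized=False, run1 Waiting\n", pod.Name)
}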
SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:93
... skipping 1198 lines ...
STEP: Deleting the previously created pod
Sep 20 04:25:42.013: INFO: Deleting pod "pvc-volume-tester-qrm2p" in namespace "csi-mock-volumes-8024"
Sep 20 04:25:42.055: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qrm2p" to be fully deleted
STEP: Checking CSI driver logs
Sep 20 04:25:46.194: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8024","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8024","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8024","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-e27d3342-0e5b-4889-9509-157a31787b56","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-e27d3342-0e5b-4889-9509-157a31787b56"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8024","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-8024","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e27d3342-0e5b-4889-9509-157a31787b56","storage.kubernetes.io/csiProvisionerIdentity":"1568953527902-8081-csi-mock-csi-mock-volumes-8024"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e27d3342-0e5b-4889-9509-157a31787b56/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e27d3342-0e5b-4889-9509-157a31787b56","storage.kubernetes.io/csiProvisionerIdentity":"1568953527902-8081-csi-mock-csi-mock-volumes-8024"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e27d3342-0e5b-4889-9509-157a31787b56/globalmount","target_path":"/var/lib/kubelet/pods/5e7826d5-b319-4a9b-ba64-d2af94ebd58b/volumes/kubernetes.io~csi/pvc-e27d3342-0e5b-4889-9509-157a31787b56/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e27d3342-0e5b-4889-9509-157a31787b56","storage.kubernetes.io/csiProvisionerIdentity":"1568953527902-8081-csi-mock-csi-mock-volumes-8024"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/5e7826d5-b319-4a9b-ba64-d2af94ebd58b/volumes/kubernetes.io~csi/pvc-e27d3342-0e5b-4889-9509-157a31787b56/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e27d3342-0e5b-4889-9509-157a31787b56/globalmount"},"Response":{},"Error":""}

Sep 20 04:25:46.194: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-qrm2p
Sep 20 04:25:46.194: INFO: Deleting pod "pvc-volume-tester-qrm2p" in namespace "csi-mock-volumes-8024"
STEP: Deleting claim pvc-kbr7b
Sep 20 04:25:46.328: INFO: Waiting up to 2m0s for PersistentVolume pvc-e27d3342-0e5b-4889-9509-157a31787b56 to get deleted
... skipping 637 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:26:04.465: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-1479
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  test/e2e/apps/job.go:130
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:152
Sep 20 04:26:07.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep 20 04:26:15.479: INFO: namespace job-1479 deletion completed in 8.105042095s


• [SLOW TEST:11.014 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should fail when exceeds active deadline
  test/e2e/apps/job.go:130
------------------------------
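The Job test above hinges on a single field: spec.activeDeadlineSeconds bounds the Job's total runtime, and once exceeded the controller kills the Job's pods and marks the Job failed with reason DeadlineExceeded. A minimal sketch (the one-second deadline and names are illustrative, not taken from this run):

package main

import (
    "fmt"

    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    deadline := int64(1) // illustrative: fail the job after one second
    job := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{Name: "short-deadline"}, // hypothetical name
        Spec: batchv1.JobSpec{
            ActiveDeadlineSeconds: &deadline,
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    RestartPolicy: corev1.RestartPolicyNever,
                    Containers: []corev1.Container{
                        {Name: "sleep", Image: "docker.io/library/busybox:1.29", Command: []string{"sleep", "3600"}},
                    },
                },
            },
        },
    }
    // Once the deadline passes, the controller kills the job's pods and adds
    // a Failed condition with reason DeadlineExceeded.
    fmt.Printf("job %q: activeDeadlineSeconds=%d\n", job.Name, *job.Spec.ActiveDeadlineSeconds)
}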
[BeforeEach] [sig-api-machinery] Discovery
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 20 04:26:05.198: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 1802 lines ...
Sep 20 04:26:27.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 20 04:26:29.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 20 04:26:32.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63704550379, loc:(*time.Location)(0x846e1e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 20 04:26:35.296: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:698
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:152
Sep 20 04:26:35.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-242" for this suite.
... skipping 6 lines ...
  test/e2e/apimachinery/webhook.go:103


• [SLOW TEST:40.896 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
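The fail-closed behaviour verified above comes from failurePolicy: Fail on the webhook registration: when the API server cannot reach the webhook backend at all, it must reject the request rather than silently admit it. A minimal sketch of such a registration using the admissionregistration.k8s.io/v1 types; the names and the deliberately unresolvable service are illustrative:

package main

import (
    "fmt"

    admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    failClosed := admissionregistrationv1.Fail
    noSideEffects := admissionregistrationv1.SideEffectClassNone
    cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
        ObjectMeta: metav1.ObjectMeta{Name: "fail-closed-demo"}, // hypothetical name
        Webhooks: []admissionregistrationv1.ValidatingWebhook{{
            Name:                    "fail-closed.example.com", // hypothetical
            FailurePolicy:           &failClosed, // unreachable backend: reject, never admit
            SideEffects:             &noSideEffects,
            AdmissionReviewVersions: []string{"v1", "v1beta1"},
            ClientConfig: admissionregistrationv1.WebhookClientConfig{
                Service: &admissionregistrationv1.ServiceReference{
                    Namespace: "webhook-ns",      // hypothetical
                    Name:      "no-such-service", // deliberately unresolvable
                },
            },
            Rules: []admissionregistrationv1.RuleWithOperations{{
                Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                Rule: admissionregistrationv1.Rule{
                    APIGroups:   []string{""},
                    APIVersions: []string{"v1"},
                    Resources:   []string{"configmaps"},
                },
            }},
        }},
    }
    fmt.Printf("webhook %s: failurePolicy=%s\n", cfg.Name, *cfg.Webhooks[0].FailurePolicy)
}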
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:93
... skipping 25 lines ...
Sep 20 04:26:52.311: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pod-disks
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-disks-8009
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  test/e2e/storage/pd.go:71
[It] should be able to delete a non-existent PD without error
  test/e2e/storage/pd.go:435
STEP: delete a PD
W0920 04:26:53.630446   12916 gce_disks.go:972] GCE persistent disk "non-exist" not found in managed zones (us-west1-b)
Sep 20 04:26:53.630: INFO: Successfully deleted PD "non-exist".
[AfterEach] [sig-storage] Pod Disks
  test/e2e/framework/framework.go:152
... skipping 3 lines ...
Sep 20 04:27:03.716: INFO: namespace pod-disks-8009 deletion completed in 10.042638546s


• [SLOW TEST:11.406 seconds]
[sig-storage] Pod Disks
test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error
  test/e2e/storage/pd.go:435
------------------------------
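What the Pod Disks case above pins down is an idempotency contract: deleting a PD that does not exist must succeed, with the cloud provider mapping the API's 404 onto a clean no-op instead of an error. A hypothetical wrapper showing the pattern; the notFound string match stands in for the real gce_disks.go detection and is an assumption:

package main

import (
    "fmt"
    "strings"
)

// deletePD is a hypothetical helper: a delete of a disk that is already gone
// is treated as success, so the call is safe to retry.
func deletePD(name string, gceDelete func(string) error) error {
    err := gceDelete(name)
    if err != nil && strings.Contains(err.Error(), "notFound") {
        return nil // already gone: idempotent success
    }
    return err
}

func main() {
    err := deletePD("non-exist", func(string) error {
        return fmt.Errorf("googleapi: Error 404: notFound")
    })
    fmt.Println("delete non-existent PD:", err) // <nil>, mirroring the log above
}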
SSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:151
... skipping 6 lines ...
  test/e2e/common/security_context.go:40
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  test/e2e/common/security_context.go:211
Sep 20 04:26:49.678: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-4da1854c-eeab-4252-b96e-9641d70a19f8" in namespace "security-context-test-7634" to be "success or failure"
Sep 20 04:26:49.714: INFO: Pod "busybox-readonly-true-4da1854c-eeab-4252-b96e-9641d70a19f8": Phase="Pending", Reason="", readiness=false. Elapsed: 36.548349ms
Sep 20 04:26:51.766: INFO: Pod "busybox-readonly-true-4da1854c-eeab-4252-b96e-9641d70a19f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087973354s
Sep 20 04:26:53.802: INFO: Pod "busybox-readonly-true-4da1854c-eeab-4252-b96e-9641d70a19f8": Phase="Failed", Reason="", readiness=false. Elapsed: 4.1246087s
Sep 20 04:26:53.802: INFO: Pod "busybox-readonly-true-4da1854c-eeab-4252-b96e-9641d70a19f8" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:152
Sep 20 04:26:53.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7634" for this suite.
Sep 20 04:27:02.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 447 lines ...
  test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:92
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:424
------------------------------
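The elided subPath case above boils down to one field: a volumeMount's subPath bind-mounts a single directory of the volume into the container, and the kubelet must still be able to tear that bind mount down even after the backing directory has been deleted. A minimal sketch of the mount shape under test (volume name and paths are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:  "subpath-user",
        Image: "docker.io/library/busybox:1.29",
        VolumeMounts: []corev1.VolumeMount{{
            Name:      "data",      // assumption: a volume declared elsewhere in the pod spec
            MountPath: "/data/out",
            SubPath:   "dir1",      // only <volume-root>/dir1 is bind-mounted into the container
        }},
    }
    fmt.Printf("%s mounts subPath %q at %s\n", c.Name, c.VolumeMounts[0].SubPath, c.VolumeMounts[0].MountPath)
}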
SS{"component":"entrypoint","file":"prow/entrypoint/run.go:163","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2019-09-20T04:27:05Z"}
Traceback (most recent call last):
  File "../test-infra/scenarios/kubernetes_e2e.py", line 778, in <module>
    main(parse_args())
  File "../test-infra/scenarios/kubernetes_e2e.py", line 626, in main
    mode.start(runner_args)
  File "../test-infra/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 13 lines ...