PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-10-19 01:22
Elapsed: 34m10s
Revision: 1163a1d51ed007ff2c3cd6fe548f60fc0b175a24
Refs: 82703

No Test Failures!


Error lines from build-log.txt

... skipping 143 lines ...
INFO: 5231 processes: 5120 remote cache hit, 27 processwrapper-sandbox, 84 remote.
INFO: Build completed successfully, 5322 total actions
INFO: Build completed successfully, 5322 total actions
make: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
2019/10/19 01:30:52 process.go:155: Step 'make -C /home/prow/go/src/k8s.io/kubernetes bazel-release' finished in 8m25.315410171s
2019/10/19 01:30:52 util.go:277: Flushing memory.
2019/10/19 01:30:57 util.go:287: flushMem error (page cache): exit status 1
2019/10/19 01:30:57 process.go:153: Running: /home/prow/go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce --allow-dup
push-build.sh: BEGIN main on 492c7d82-f20e-11e9-872d-aab87b429caf Sat Oct 19 01:30:57 UTC 2019

$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
INFO: Invocation ID: 412a67b4-d08a-43ea-ba76-1d0a1a5bd13c
Loading: 
... skipping 875 lines ...
Trying to find master named 'e2e-b3be4e167f-abe28-master'
Looking for address 'e2e-b3be4e167f-abe28-master-ip'
Using master: e2e-b3be4e167f-abe28-master (external IP: 35.247.29.49; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

............Kubernetes cluster created.
Cluster "k8s-jkns-e2e-gce-serial-1-2_e2e-b3be4e167f-abe28" set.
User "k8s-jkns-e2e-gce-serial-1-2_e2e-b3be4e167f-abe28" set.
Context "k8s-jkns-e2e-gce-serial-1-2_e2e-b3be4e167f-abe28" created.
Switched to context "k8s-jkns-e2e-gce-serial-1-2_e2e-b3be4e167f-abe28".
... skipping 381 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:56
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      test/e2e/storage/testsuites/topology.go:95
------------------------------
... skipping 402 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 225 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [aws] (not gce)

      test/e2e/storage/drivers/in_tree.go:1590
------------------------------
... skipping 1146 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 19 01:47:00.847: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c86bb856-036e-4d75-bff6-765a6da360d0"
Oct 19 01:47:00.847: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c86bb856-036e-4d75-bff6-765a6da360d0" in namespace "pods-3322" to be "terminated due to deadline exceeded"
Oct 19 01:47:00.886: INFO: Pod "pod-update-activedeadlineseconds-c86bb856-036e-4d75-bff6-765a6da360d0": Phase="Running", Reason="", readiness=true. Elapsed: 39.545744ms
Oct 19 01:47:03.138: INFO: Pod "pod-update-activedeadlineseconds-c86bb856-036e-4d75-bff6-765a6da360d0": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.291253854s
Oct 19 01:47:03.138: INFO: Pod "pod-update-activedeadlineseconds-c86bb856-036e-4d75-bff6-765a6da360d0" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:151
Oct 19 01:47:03.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3322" for this suite.

... skipping 150 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [aws] (not gce)

      test/e2e/storage/drivers/in_tree.go:1590
------------------------------
... skipping 381 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 23 lines ...
  test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 19 01:46:53.960: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-204
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  test/e2e/apps/job.go:113
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:151
Oct 19 01:47:20.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-204" for this suite.


• [SLOW TEST:26.538 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  test/e2e/apps/job.go:113
------------------------------
S
------------------------------
[BeforeEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:150
... skipping 270 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 5 lines ...
Oct 19 01:47:21.773: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-provisioning-3738
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  test/e2e/storage/volume_provisioning.go:136
[It] should report an error and create no PV
  test/e2e/storage/volume_provisioning.go:778
Oct 19 01:47:23.094: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [sig-storage] Dynamic Provisioning
  test/e2e/framework/framework.go:151
Oct 19 01:47:23.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-3738" for this suite.


S [SKIPPING] [1.744 seconds]
[sig-storage] Dynamic Provisioning
test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  test/e2e/storage/volume_provisioning.go:777
    should report an error and create no PV [It]
    test/e2e/storage/volume_provisioning.go:778

    Only supported for providers [aws] (not gce)

    test/e2e/storage/volume_provisioning.go:779
------------------------------
... skipping 965 lines ...
Oct 19 01:46:53.736: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-x4bsc] to have phase Bound
Oct 19 01:46:53.819: INFO: PersistentVolumeClaim pvc-x4bsc found but phase is Pending instead of Bound.
Oct 19 01:46:55.865: INFO: PersistentVolumeClaim pvc-x4bsc found and phase=Bound (2.128814091s)
Oct 19 01:46:55.865: INFO: Waiting up to 3m0s for PersistentVolume gce-j96j8 to have phase Bound
Oct 19 01:46:55.904: INFO: PersistentVolume gce-j96j8 found and phase=Bound (39.251964ms)
STEP: Creating the Client Pod
[It] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:124
STEP: Deleting the Claim
Oct 19 01:47:20.196: INFO: Deleting PersistentVolumeClaim "pvc-x4bsc"
STEP: Deleting the Pod
Oct 19 01:47:20.578: INFO: Deleting pod "pvc-tester-x4xqw" in namespace "pv-6432"
Oct 19 01:47:20.653: INFO: Wait up to 5m0s for pod "pvc-tester-x4xqw" to be fully deleted
... skipping 14 lines ...
Oct 19 01:47:46.258: INFO: Successfully deleted PD "e2e-b3be4e167f-abe28-689c0fb7-f6e8-4a9e-936e-1577f8e13f8a".


• [SLOW TEST:57.194 seconds]
[sig-storage] PersistentVolumes GCEPD
test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:124
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 19 01:47:38.654: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 867 lines ...
Oct 19 01:47:39.279: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:39.318: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:39.521: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:39.573: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:39.614: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:39.654: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:39.745: INFO: Lookups using dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local]

Oct 19 01:47:44.786: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:44.828: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:44.871: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:44.911: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:45.059: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:45.109: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:45.157: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:45.205: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:45.286: INFO: Lookups using dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local]

Oct 19 01:47:49.792: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:49.831: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:49.891: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:49.937: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:50.079: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:50.126: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:50.168: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:50.211: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:50.297: INFO: Lookups using dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local]

Oct 19 01:47:54.800: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:54.863: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:54.960: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:55.082: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:55.292: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:55.391: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:55.445: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:55.496: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:55.595: INFO: Lookups using dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local]

Oct 19 01:47:59.786: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:59.827: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:59.868: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:47:59.912: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:00.148: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:00.191: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:00.259: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:00.309: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:00.437: INFO: Lookups using dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local]

Oct 19 01:48:04.790: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:04.834: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:04.876: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:04.920: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:05.060: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:05.112: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:05.177: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:05.242: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local from pod dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c: the server could not find the requested resource (get pods dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c)
Oct 19 01:48:05.516: INFO: Lookups using dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3704.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3704.svc.cluster.local jessie_udp@dns-test-service-2.dns-3704.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3704.svc.cluster.local]

Oct 19 01:48:10.435: INFO: DNS probes using dns-3704/dns-test-e132a1db-2b0e-47c1-878e-556511d3be1c succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:151
... skipping 940 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3122
STEP: Creating statefulset with conflicting port in namespace statefulset-3122
STEP: Waiting until pod test-pod will start running in namespace statefulset-3122
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3122
Oct 19 01:47:48.043: INFO: Observed stateful pod in namespace: statefulset-3122, name: ss-0, uid: 8043d1cb-bde8-4978-8c69-55a2da258671, status phase: Pending. Waiting for statefulset controller to delete.
Oct 19 01:47:49.295: INFO: Observed stateful pod in namespace: statefulset-3122, name: ss-0, uid: 8043d1cb-bde8-4978-8c69-55a2da258671, status phase: Failed. Waiting for statefulset controller to delete.
Oct 19 01:47:49.313: INFO: Observed stateful pod in namespace: statefulset-3122, name: ss-0, uid: 8043d1cb-bde8-4978-8c69-55a2da258671, status phase: Failed. Waiting for statefulset controller to delete.
Oct 19 01:47:49.339: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3122
STEP: Removing pod with conflicting port in namespace statefulset-3122
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3122 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:90
Oct 19 01:48:03.856: INFO: Deleting all statefulset in ns statefulset-3122
... skipping 224 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 499 lines ...
Oct 19 01:48:09.404: INFO: rc: 1
STEP: cleaning the environment after flex
Oct 19 01:48:09.404: INFO: Deleting pod "flex-client" in namespace "flexvolume-8167"
Oct 19 01:48:09.445: INFO: Wait up to 5m0s for pod "flex-client" to be fully deleted
STEP: waiting for flex client pod to terminate
Oct 19 01:48:21.523: INFO: Waiting up to 5m0s for pod "flex-client" in namespace "flexvolume-8167" to be "terminated due to deadline exceeded"
Oct 19 01:48:21.599: INFO: Pod "flex-client" in namespace "flexvolume-8167" not found. Error: pods "flex-client" not found
STEP: uninstalling flexvolume dummy-attachable-flexvolume-8167 from node e2e-b3be4e167f-abe28-minion-group-4hd0
Oct 19 01:48:31.599: INFO: Getting external IP address for e2e-b3be4e167f-abe28-minion-group-4hd0
Oct 19 01:48:32.078: INFO: ssh prow@34.83.81.62:22: command:   sudo rm -r /home/kubernetes/flexvolume/k8s~dummy-attachable-flexvolume-8167
Oct 19 01:48:32.078: INFO: ssh prow@34.83.81.62:22: stdout:    ""
Oct 19 01:48:32.078: INFO: ssh prow@34.83.81.62:22: stderr:    ""
Oct 19 01:48:32.078: INFO: ssh prow@34.83.81.62:22: exit code: 0
... skipping 3009 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 176 lines ...
STEP: Deleting the previously created pod
Oct 19 01:49:08.054: INFO: Deleting pod "pvc-volume-tester-6qjfr" in namespace "csi-mock-volumes-199"
Oct 19 01:49:08.094: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6qjfr" to be fully deleted
STEP: Checking CSI driver logs
Oct 19 01:49:20.211: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-199","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-199","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-199","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-199","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-06ca5cdf-82a1-47de-8cfe-7d6c92b7cbd3","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-06ca5cdf-82a1-47de-8cfe-7d6c92b7cbd3"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-199","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-06ca5cdf-82a1-47de-8cfe-7d6c92b7cbd3","storage.kubernetes.io/csiProvisionerIdentity":"1571449719122-8081-csi-mock-csi-mock-volumes-199"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-06ca5cdf-82a1-47de-8cfe-7d6c92b7cbd3/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-06ca5cdf-82a1-47de-8cfe-7d6c92b7cbd3","storage.kubernetes.io/csiProvisionerIdentity":"1571449719122-8081-csi-mock-csi-mock-volumes-199"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-06ca5cdf-82a1-47de-8cfe-7d6c92b7cbd3/globalmount","target_path":"/var/lib/kubelet/pods/16acc5c8-e209-4d70-8edf-508d99f5612f/volumes/kubernetes.io~csi/pvc-06ca5cdf-82a1-47de-8cfe-7d6c92b7cbd3/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-06ca5cdf-82a1-47de-8cfe-7d6c92b7cbd3","storage.kubernetes.io/csiProvisionerIdentity":"1571449719122-8081-csi-mock-csi-mock-volumes-199"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/16acc5c8-e209-4d70-8edf-508d99f5612f/volumes/kubernetes.io~csi/pvc-06ca5cdf-82a1-47de-8cfe-7d6c92b7cbd3/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-06ca5cdf-82a1-47de-8cfe-7d6c92b7cbd3/globalmount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-199"},"Response":{},"Error":""}

Oct 19 01:49:20.211: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-6qjfr
Oct 19 01:49:20.211: INFO: Deleting pod "pvc-volume-tester-6qjfr" in namespace "csi-mock-volumes-199"
STEP: Deleting claim pvc-jdg2b
Oct 19 01:49:20.352: INFO: Waiting up to 2m0s for PersistentVolume pvc-06ca5cdf-82a1-47de-8cfe-7d6c92b7cbd3 to get deleted
... skipping 555 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:1449
------------------------------
... skipping 365 lines ...
Oct 19 01:49:29.040: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:691
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:151
Oct 19 01:49:42.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7" for this suite.
STEP: Destroying namespace "webhook-7-markers" for this suite.
... skipping 1483 lines ...
Oct 19 01:49:26.809: INFO: PersistentVolumeClaim csi-hostpathx52sz found but phase is Pending instead of Bound.
Oct 19 01:49:28.867: INFO: PersistentVolumeClaim csi-hostpathx52sz found but phase is Pending instead of Bound.
Oct 19 01:49:30.904: INFO: PersistentVolumeClaim csi-hostpathx52sz found but phase is Pending instead of Bound.
Oct 19 01:49:32.965: INFO: PersistentVolumeClaim csi-hostpathx52sz found and phase=Bound (40.963723703s)
STEP: Expanding non-expandable pvc
Oct 19 01:49:33.054: INFO: currentPvcSize {{1048576 0} {<nil>} 1Mi BinarySI}, newSize {{1074790400 0} {<nil>}  BinarySI}
Oct 19 01:49:33.136: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:35.214: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:37.282: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:39.206: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:41.225: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:43.213: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:45.209: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:47.214: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:49.239: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:51.208: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:53.222: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:55.215: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:57.213: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:49:59.211: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:50:01.425: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:50:03.207: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 01:50:03.279: INFO: Error updating pvc csi-hostpathx52sz with persistentvolumeclaims "csi-hostpathx52sz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Oct 19 01:50:03.279: INFO: Deleting PersistentVolumeClaim "csi-hostpathx52sz"
Oct 19 01:50:03.319: INFO: Waiting up to 5m0s for PersistentVolume pvc-4fb335c9-5ea9-4959-a5ec-4787fb32fb7d to get deleted
Oct 19 01:50:03.373: INFO: PersistentVolume pvc-4fb335c9-5ea9-4959-a5ec-4787fb32fb7d found and phase=Bound (54.462867ms)
Oct 19 01:50:08.413: INFO: PersistentVolume pvc-4fb335c9-5ea9-4959-a5ec-4787fb32fb7d was removed
STEP: Deleting sc
... skipping 266 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 1344 lines ...
Oct 19 01:46:49.702: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-problem-detector-803
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
  test/e2e/node/node_problem_detector.go:49
Oct 19 01:46:49.938: INFO: Waiting up to 1m0s for all nodes to be ready
[It] should run without error
  test/e2e/node/node_problem_detector.go:57
STEP: Getting all nodes and their SSH-able IP addresses
STEP: Check node "34.83.81.62:22" has node-problem-detector process
STEP: Check node-problem-detector is running fine on node "34.83.81.62:22"
STEP: Inject log to trigger AUFSUmountHung on node "34.83.81.62:22"
STEP: Check node "34.82.30.53:22" has node-problem-detector process
... skipping 23 lines ...
STEP: Destroying namespace "node-problem-detector-803" for this suite.


• [SLOW TEST:221.396 seconds]
[k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
test/e2e/framework/framework.go:686
  should run without error
  test/e2e/node/node_problem_detector.go:57
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 19 01:50:30.418: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 185 lines ...
STEP: Deleting the previously created pod
Oct 19 01:50:24.585: INFO: Deleting pod "pvc-volume-tester-lgmt7" in namespace "csi-mock-volumes-9812"
Oct 19 01:50:24.623: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lgmt7" to be fully deleted
STEP: Checking CSI driver logs
Oct 19 01:50:30.776: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9812","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9812","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-108e895e-bff1-4685-9ea6-a2461aa39d0c","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-108e895e-bff1-4685-9ea6-a2461aa39d0c"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9812","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9812","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-9812","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-108e895e-bff1-4685-9ea6-a2461aa39d0c","storage.kubernetes.io/csiProvisionerIdentity":"1571449806991-8081-csi-mock-csi-mock-volumes-9812"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-108e895e-bff1-4685-9ea6-a2461aa39d0c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-108e895e-bff1-4685-9ea6-a2461aa39d0c","storage.kubernetes.io/csiProvisionerIdentity":"1571449806991-8081-csi-mock-csi-mock-volumes-9812"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-108e895e-bff1-4685-9ea6-a2461aa39d0c/globalmount","target_path":"/var/lib/kubelet/pods/a25c4212-48d1-4da9-8481-bda1fbb6fbb0/volumes/kubernetes.io~csi/pvc-108e895e-bff1-4685-9ea6-a2461aa39d0c/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-108e895e-bff1-4685-9ea6-a2461aa39d0c","storage.kubernetes.io/csiProvisionerIdentity":"1571449806991-8081-csi-mock-csi-mock-volumes-9812"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/a25c4212-48d1-4da9-8481-bda1fbb6fbb0/volumes/kubernetes.io~csi/pvc-108e895e-bff1-4685-9ea6-a2461aa39d0c/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-108e895e-bff1-4685-9ea6-a2461aa39d0c/globalmount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-9812"},"Response":{},"Error":""}

Oct 19 01:50:30.776: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-lgmt7
Oct 19 01:50:30.776: INFO: Deleting pod "pvc-volume-tester-lgmt7" in namespace "csi-mock-volumes-9812"
STEP: Deleting claim pvc-m2khr
Oct 19 01:50:30.891: INFO: Waiting up to 2m0s for PersistentVolume pvc-108e895e-bff1-4685-9ea6-a2461aa39d0c to get deleted
... skipping 765 lines ...
Oct 19 01:46:49.440: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Oct 19 01:46:49.605: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-7975
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  test/e2e/apps/cronjob.go:55
[It] should delete successful/failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:233
STEP: Creating a AllowConcurrent cronjob with custom successful-jobs-history-limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
STEP: Removing cronjob
STEP: Creating a AllowConcurrent cronjob with custom failed-jobs-history-limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
... skipping 2 lines ...
STEP: Destroying namespace "cronjob-7975" for this suite.


• [SLOW TEST:231.676 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should delete successful/failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:233
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/testsuites/base.go:98
Oct 19 01:50:40.644: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 1198 lines ...
Oct 19 01:50:45.617: INFO: Pod exec-volume-test-gcepd-preprovisionedpv-zx6c no longer exists
STEP: Deleting pod exec-volume-test-gcepd-preprovisionedpv-zx6c
Oct 19 01:50:45.618: INFO: Deleting pod "exec-volume-test-gcepd-preprovisionedpv-zx6c" in namespace "volume-4975"
STEP: Deleting pv and pvc
Oct 19 01:50:45.722: INFO: Deleting PersistentVolumeClaim "pvc-lxllj"
Oct 19 01:50:45.853: INFO: Deleting PersistentVolume "gcepd-zljzn"
Oct 19 01:50:47.553: INFO: error deleting PD "e2e-b3be4e167f-abe28-701e92be-3116-476e-9424-71ce118abb1f": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-701e92be-3116-476e-9424-71ce118abb1f' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-69t5', resourceInUseByAnotherResource
Oct 19 01:50:47.554: INFO: Couldn't delete PD "e2e-b3be4e167f-abe28-701e92be-3116-476e-9424-71ce118abb1f", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-701e92be-3116-476e-9424-71ce118abb1f' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-69t5', resourceInUseByAnotherResource
Oct 19 01:50:54.291: INFO: error deleting PD "e2e-b3be4e167f-abe28-701e92be-3116-476e-9424-71ce118abb1f": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-701e92be-3116-476e-9424-71ce118abb1f' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-69t5', resourceInUseByAnotherResource
Oct 19 01:50:54.291: INFO: Couldn't delete PD "e2e-b3be4e167f-abe28-701e92be-3116-476e-9424-71ce118abb1f", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-701e92be-3116-476e-9424-71ce118abb1f' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-69t5', resourceInUseByAnotherResource
Oct 19 01:51:01.881: INFO: Successfully deleted PD "e2e-b3be4e167f-abe28-701e92be-3116-476e-9424-71ce118abb1f".
Oct 19 01:51:01.881: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:151
Oct 19 01:51:01.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4975" for this suite.
... skipping 533 lines ...
Oct 19 01:50:48.605: INFO: Creating resource for dynamic PV
Oct 19 01:50:48.605: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-expand-7735-gcepd-sc4774m
STEP: creating a claim
STEP: Expanding non-expandable pvc
Oct 19 01:50:48.724: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Oct 19 01:50:48.801: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:50:50.899: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:50:52.945: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:50:54.887: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:50:56.891: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:50:58.896: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:51:00.879: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:51:02.909: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:51:04.885: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:51:06.887: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:51:08.883: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:51:10.879: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:51:13.099: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:51:14.935: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:51:16.949: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:51:18.886: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Oct 19 01:51:18.967: INFO: Error updating pvc gcepd796h4 with PersistentVolumeClaim "gcepd796h4" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
STEP: Deleting pvc
Oct 19 01:51:18.967: INFO: Deleting PersistentVolumeClaim "gcepd796h4"
STEP: Deleting sc
Oct 19 01:51:19.054: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/framework/framework.go:151
... skipping 460 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 4110 lines ...
Oct 19 01:52:06.926: INFO: Waiting for PV local-pvt4zsn to bind to PVC pvc-b5wbh
Oct 19 01:52:06.926: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-b5wbh] to have phase Bound
Oct 19 01:52:06.981: INFO: PersistentVolumeClaim pvc-b5wbh found but phase is Pending instead of Bound.
Oct 19 01:52:09.070: INFO: PersistentVolumeClaim pvc-b5wbh found and phase=Bound (2.143508838s)
Oct 19 01:52:09.070: INFO: Waiting up to 3m0s for PersistentVolume local-pvt4zsn to have phase Bound
Oct 19 01:52:09.180: INFO: PersistentVolume local-pvt4zsn found and phase=Bound (109.864494ms)
[It] should fail scheduling due to different NodeAffinity
  test/e2e/storage/persistent_volumes-local.go:359
STEP: local-volume-type: dir
STEP: Initializing test volumes
Oct 19 01:52:09.413: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=persistent-local-volumes-test-7825 hostexec-e2e-b3be4e167f-abe28-minion-group-4hd0-v6gtm -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-53049d18-6bd8-495c-9818-52f59b6f41da'
Oct 19 01:52:11.253: INFO: stderr: ""
Oct 19 01:52:11.253: INFO: stdout: ""
... skipping 25 lines ...

• [SLOW TEST:25.232 seconds]
[sig-storage] PersistentVolumes-local 
test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  test/e2e/storage/persistent_volumes-local.go:337
    should fail scheduling due to different NodeAffinity
    test/e2e/storage/persistent_volumes-local.go:359
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:98
... skipping 1191 lines ...
Oct 19 01:52:35.584: INFO: AfterEach: Cleaning up test resources


S [SKIPPING] in Spec Setup (BeforeEach) [0.464 seconds]
[sig-storage] PersistentVolumes:vsphere
test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on vsphere volume detach [BeforeEach]
  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:147

  Only supported for providers [vsphere] (not gce)

  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
------------------------------
... skipping 896 lines ...
  test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 19 01:52:43.801: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in topology-639
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  test/e2e/storage/testsuites/topology.go:192
Oct 19 01:52:44.263: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:us-west1-b]
Oct 19 01:52:44.403: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Oct 19 01:52:46.801: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Oct 19 01:52:51.576: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
... skipping 9 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 280 lines ...
Oct 19 01:52:07.749: INFO: creating *v1.StatefulSet: csi-mock-volumes-2683/csi-mockplugin
Oct 19 01:52:07.807: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-2683
Oct 19 01:52:07.854: INFO: creating *v1.StatefulSet: csi-mock-volumes-2683/csi-mockplugin-attacher
Oct 19 01:52:07.914: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2683"
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Oct 19 01:52:36.247: INFO: Error getting logs for pod csi-inline-volume-rg2z6: the server rejected our request for an unknown reason (get pods csi-inline-volume-rg2z6)
STEP: Deleting pod csi-inline-volume-rg2z6 in namespace csi-mock-volumes-2683
STEP: Deleting the previously created pod
Oct 19 01:52:46.403: INFO: Deleting pod "pvc-volume-tester-hwsnr" in namespace "csi-mock-volumes-2683"
Oct 19 01:52:46.481: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hwsnr" to be fully deleted
STEP: Checking CSI driver logs
Oct 19 01:52:56.653: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2683","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2683","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2683","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2683","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"csi-6b9531f4cbed232c5e640884164b3c3c264704c9d7c7bd36db8cf233d171460e","target_path":"/var/lib/kubelet/pods/9dd139cb-0ee8-4026-9b4b-704d33244b5f/volumes/kubernetes.io~csi/my-volume/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"pvc-volume-tester-hwsnr","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-2683","csi.storage.k8s.io/pod.uid":"9dd139cb-0ee8-4026-9b4b-704d33244b5f","csi.storage.k8s.io/serviceAccount.name":"default"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"csi-6b9531f4cbed232c5e640884164b3c3c264704c9d7c7bd36db8cf233d171460e","volume_path":"/var/lib/kubelet/pods/9dd139cb-0ee8-4026-9b4b-704d33244b5f/volumes/kubernetes.io~csi/my-volume/mount"},"Response":null,"Error":"rpc error: code = NotFound desc = csi-6b9531f4cbed232c5e640884164b3c3c264704c9d7c7bd36db8cf233d171460e"}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-6b9531f4cbed232c5e640884164b3c3c264704c9d7c7bd36db8cf233d171460e","target_path":"/var/lib/kubelet/pods/9dd139cb-0ee8-4026-9b4b-704d33244b5f/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":""}

Oct 19 01:52:56.654: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Oct 19 01:52:56.654: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-hwsnr
Oct 19 01:52:56.654: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-2683
Oct 19 01:52:56.654: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 9dd139cb-0ee8-4026-9b4b-704d33244b5f
Oct 19 01:52:56.654: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
... skipping 499 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 646 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 307 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 101 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver gluster doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 2821 lines ...
Oct 19 01:53:45.644: INFO: Trying to get logs from node e2e-b3be4e167f-abe28-minion-group-92xt pod exec-volume-test-gcepd-9q9s container exec-container-gcepd-9q9s: <nil>
STEP: delete the pod
Oct 19 01:53:45.784: INFO: Waiting for pod exec-volume-test-gcepd-9q9s to disappear
Oct 19 01:53:45.824: INFO: Pod exec-volume-test-gcepd-9q9s no longer exists
STEP: Deleting pod exec-volume-test-gcepd-9q9s
Oct 19 01:53:45.824: INFO: Deleting pod "exec-volume-test-gcepd-9q9s" in namespace "volume-2434"
Oct 19 01:53:47.622: INFO: error deleting PD "e2e-b3be4e167f-abe28-af17bec6-017e-4f23-976d-f685ab8d8959": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-af17bec6-017e-4f23-976d-f685ab8d8959' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-92xt', resourceInUseByAnotherResource
Oct 19 01:53:47.622: INFO: Couldn't delete PD "e2e-b3be4e167f-abe28-af17bec6-017e-4f23-976d-f685ab8d8959", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-af17bec6-017e-4f23-976d-f685ab8d8959' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-92xt', resourceInUseByAnotherResource
Oct 19 01:53:55.215: INFO: Successfully deleted PD "e2e-b3be4e167f-abe28-af17bec6-017e-4f23-976d-f685ab8d8959".
Oct 19 01:53:55.215: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:151
Oct 19 01:53:55.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2434" for this suite.
... skipping 1186 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not gce)

      test/e2e/storage/drivers/in_tree.go:1449
------------------------------
... skipping 225 lines ...
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-9775 to expose endpoints map[hairpin:[8080]]
Oct 19 01:54:08.869: INFO: successfully validated that service hairpin-test in namespace services-9775 exposes endpoints map[hairpin:[8080]] (96.107949ms elapsed)
STEP: Checking if the pod can reach itself
Oct 19 01:54:09.870: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=services-9775 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080'
Oct 19 01:54:12.005: INFO: rc: 1
Oct 19 01:54:12.005: INFO: Service reachability failing with error: error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=services-9775 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080] []  <nil>  + nc -zv -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) failed: Connection refused
command terminated with exit code 1
 [] <nil> 0xc00286a840 exit status 1 <nil> <nil> true [0xc001bbc648 0xc001bbc668 0xc001bbc690] [0xc001bbc648 0xc001bbc668 0xc001bbc690] [0xc001bbc660 0xc001bbc680] [0x10f1850 0x10f1850] 0xc0018a5ce0 <nil>}:
Command stdout:

stderr:
+ nc -zv -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 01:54:13.005: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=services-9775 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080'
Oct 19 01:54:15.399: INFO: rc: 1
Oct 19 01:54:15.399: INFO: Service reachability failing with error: error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=services-9775 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080] []  <nil>  + nc -zv -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) failed: Connection refused
command terminated with exit code 1
 [] <nil> 0xc00286b080 exit status 1 <nil> <nil> true [0xc001bbc6a0 0xc001bbc6b8 0xc001bbc6e8] [0xc001bbc6a0 0xc001bbc6b8 0xc001bbc6e8] [0xc001bbc6b0 0xc001bbc6d8] [0x10f1850 0x10f1850] 0xc00157d4a0 <nil>}:
Command stdout:

stderr:
+ nc -zv -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 01:54:16.005: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=services-9775 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080'
Oct 19 01:54:18.190: INFO: stderr: "+ nc -zv -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Oct 19 01:54:18.190: INFO: stdout: ""
Oct 19 01:54:18.191: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=services-9775 hairpin -- /bin/sh -x -c nc -zv -t -w 2 10.0.191.39 8080'
... skipping 567 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath-v0]
  test/e2e/storage/csi_volumes.go:56
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath-v0" does not support topology - skipping

      test/e2e/storage/testsuites/topology.go:95
------------------------------
... skipping 60 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 871 lines ...
Oct 19 01:54:24.545: INFO: Pod exec-volume-test-gcepd-preprovisionedpv-j86b no longer exists
STEP: Deleting pod exec-volume-test-gcepd-preprovisionedpv-j86b
Oct 19 01:54:24.545: INFO: Deleting pod "exec-volume-test-gcepd-preprovisionedpv-j86b" in namespace "volume-6193"
STEP: Deleting pv and pvc
Oct 19 01:54:24.585: INFO: Deleting PersistentVolumeClaim "pvc-svzkw"
Oct 19 01:54:24.623: INFO: Deleting PersistentVolume "gcepd-2864b"
Oct 19 01:54:26.293: INFO: error deleting PD "e2e-b3be4e167f-abe28-e36a240a-620c-4d93-a5f1-ada13d8fcc48": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-e36a240a-620c-4d93-a5f1-ada13d8fcc48' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-4hd0', resourceInUseByAnotherResource
Oct 19 01:54:26.293: INFO: Couldn't delete PD "e2e-b3be4e167f-abe28-e36a240a-620c-4d93-a5f1-ada13d8fcc48", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-e36a240a-620c-4d93-a5f1-ada13d8fcc48' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-4hd0', resourceInUseByAnotherResource
Oct 19 01:54:32.941: INFO: error deleting PD "e2e-b3be4e167f-abe28-e36a240a-620c-4d93-a5f1-ada13d8fcc48": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-e36a240a-620c-4d93-a5f1-ada13d8fcc48' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-4hd0', resourceInUseByAnotherResource
Oct 19 01:54:32.941: INFO: Couldn't delete PD "e2e-b3be4e167f-abe28-e36a240a-620c-4d93-a5f1-ada13d8fcc48", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-e36a240a-620c-4d93-a5f1-ada13d8fcc48' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-4hd0', resourceInUseByAnotherResource
Oct 19 01:54:40.489: INFO: Successfully deleted PD "e2e-b3be4e167f-abe28-e36a240a-620c-4d93-a5f1-ada13d8fcc48".
Oct 19 01:54:40.489: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:151
Oct 19 01:54:40.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6193" for this suite.
... skipping 97 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 70 lines ...
Oct 19 01:53:56.194: INFO: Exec stderr: ""
Oct 19 01:54:10.321: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=mount-propagation-9481 hostexec-e2e-b3be4e167f-abe28-minion-group-4hd0-fjj2v -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-9481"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-9481"/host; echo host > "/var/lib/kubelet/mount-propagation-9481"/host/file'
Oct 19 01:54:11.105: INFO: stderr: ""
Oct 19 01:54:11.105: INFO: stdout: ""
Oct 19 01:54:11.140: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-9481 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:11.140: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:11.610: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:11.702: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-9481 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:11.702: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:12.405: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:12.544: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-9481 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:12.544: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:13.735: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:13.835: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-9481 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:13.835: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:14.425: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Oct 19 01:54:14.471: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-9481 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:14.471: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:14.960: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:14.996: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-9481 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:14.996: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:15.764: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Oct 19 01:54:15.813: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-9481 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:15.813: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:16.868: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:16.908: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-9481 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:16.908: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:19.084: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:19.214: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-9481 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:19.214: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:20.982: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:21.118: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-9481 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:21.118: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:22.871: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Oct 19 01:54:22.916: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-9481 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:22.916: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:23.413: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Oct 19 01:54:23.461: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-9481 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:23.461: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:24.107: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Oct 19 01:54:24.146: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-9481 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:24.146: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:25.180: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:25.231: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-9481 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:25.232: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:26.608: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:26.651: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-9481 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:26.651: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:28.475: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Oct 19 01:54:28.515: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-9481 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:28.515: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:29.997: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:30.033: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-9481 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:30.033: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:31.383: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:31.422: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-9481 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:31.423: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:32.019: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Oct 19 01:54:32.057: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-9481 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:32.057: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:32.988: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:33.027: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-9481 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:33.027: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:33.882: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Oct 19 01:54:33.883: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=mount-propagation-9481 hostexec-e2e-b3be4e167f-abe28-minion-group-4hd0-fjj2v -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-9481"/master/file` = master'
Oct 19 01:54:35.219: INFO: stderr: ""
Oct 19 01:54:35.219: INFO: stdout: ""
Oct 19 01:54:35.219: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=mount-propagation-9481 hostexec-e2e-b3be4e167f-abe28-minion-group-4hd0-fjj2v -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-9481"/slave/file'
Oct 19 01:54:36.064: INFO: stderr: ""
Oct 19 01:54:36.064: INFO: stdout: ""
... skipping 211 lines ...
STEP: cleaning the environment after gcepd
Oct 19 01:54:18.544: INFO: Deleting pod "gcepd-client" in namespace "volume-9833"
Oct 19 01:54:18.590: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Oct 19 01:54:34.674: INFO: Deleting PersistentVolumeClaim "pvc-vlqzk"
Oct 19 01:54:34.721: INFO: Deleting PersistentVolume "gcepd-dcx6r"
Oct 19 01:54:36.440: INFO: error deleting PD "e2e-b3be4e167f-abe28-5fb2e210-d4fa-4378-af92-1f394c1ed34e": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-5fb2e210-d4fa-4378-af92-1f394c1ed34e' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-69t5', resourceInUseByAnotherResource
Oct 19 01:54:36.440: INFO: Couldn't delete PD "e2e-b3be4e167f-abe28-5fb2e210-d4fa-4378-af92-1f394c1ed34e", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-5fb2e210-d4fa-4378-af92-1f394c1ed34e' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-69t5', resourceInUseByAnotherResource
Oct 19 01:54:43.928: INFO: Successfully deleted PD "e2e-b3be4e167f-abe28-5fb2e210-d4fa-4378-af92-1f394c1ed34e".
Oct 19 01:54:43.928: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/framework/framework.go:151
Oct 19 01:54:43.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9833" for this suite.
... skipping 1414 lines ...
Oct 19 01:55:02.578: INFO: Trying to get logs from node e2e-b3be4e167f-abe28-minion-group-92xt pod exec-volume-test-gcepd-txfc container exec-container-gcepd-txfc: <nil>
STEP: delete the pod
Oct 19 01:55:02.693: INFO: Waiting for pod exec-volume-test-gcepd-txfc to disappear
Oct 19 01:55:02.730: INFO: Pod exec-volume-test-gcepd-txfc no longer exists
STEP: Deleting pod exec-volume-test-gcepd-txfc
Oct 19 01:55:02.730: INFO: Deleting pod "exec-volume-test-gcepd-txfc" in namespace "volume-1334"
Oct 19 01:55:04.590: INFO: error deleting PD "e2e-b3be4e167f-abe28-f5816557-8df0-4426-920c-58fb35739c1a": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-f5816557-8df0-4426-920c-58fb35739c1a' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-92xt', resourceInUseByAnotherResource
Oct 19 01:55:04.590: INFO: Couldn't delete PD "e2e-b3be4e167f-abe28-f5816557-8df0-4426-920c-58fb35739c1a", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-f5816557-8df0-4426-920c-58fb35739c1a' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-92xt', resourceInUseByAnotherResource
Oct 19 01:55:12.196: INFO: Successfully deleted PD "e2e-b3be4e167f-abe28-f5816557-8df0-4426-920c-58fb35739c1a".
Oct 19 01:55:12.196: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:151
Oct 19 01:55:12.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1334" for this suite.
... skipping 943 lines ...
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-1425
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:691
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 19 01:55:17.522: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 399 lines ...
Oct 19 01:54:11.393: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-7950
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:691
STEP: creating the pod
Oct 19 01:54:11.767: INFO: PodSpec: initContainers in spec.initContainers
Oct 19 01:55:22.615: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-cb3716eb-f898-4a5d-8398-35926578f2fd", GenerateName:"", Namespace:"init-container-7950", SelfLink:"/api/v1/namespaces/init-container-7950/pods/pod-init-cb3716eb-f898-4a5d-8398-35926578f2fd", UID:"8f1c328e-7d98-468d-b08d-251f7a182e72", ResourceVersion:"18064", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63707046851, loc:(*time.Location)(0x83d1280)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"767624130"}, Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rrf86", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0008d9780), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rrf86", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rrf86", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rrf86", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f91888), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"e2e-b3be4e167f-abe28-minion-group-92xt", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d1c1e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f91900)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f91920)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f91928), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f9192c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63707046851, loc:(*time.Location)(0x83d1280)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63707046851, loc:(*time.Location)(0x83d1280)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63707046851, loc:(*time.Location)(0x83d1280)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63707046851, loc:(*time.Location)(0x83d1280)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.40.0.5", PodIP:"10.64.1.198", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.1.198"}}, StartTime:(*v1.Time)(0xc001f68a40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009d58f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009d5960)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://dda99a8fa06c2993f1916f4ec7f865977be9f7e3c0f79cc653c39c6a0beee90c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f68ac0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f68a80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc001f919af)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:151
Oct 19 01:55:22.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7950" for this suite.


• [SLOW TEST:71.349 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:686
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:691
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:98
Oct 19 01:55:22.744: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
... skipping 294 lines ...
Oct 19 01:55:00.709: INFO: stdout: ""
Oct 19 01:55:00.709: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=volume-1685 gcepd-client -- /bin/sh -c test -b /opt/0'
Oct 19 01:55:02.514: INFO: rc: 1
STEP: cleaning the environment after gcepd
Oct 19 01:55:02.514: INFO: Deleting pod "gcepd-client" in namespace "volume-1685"
Oct 19 01:55:02.559: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
Oct 19 01:55:20.276: INFO: error deleting PD "e2e-b3be4e167f-abe28-1be3bf14-157e-4a43-9dbd-739823f8c767": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-1be3bf14-157e-4a43-9dbd-739823f8c767' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-69t5', resourceInUseByAnotherResource
Oct 19 01:55:20.276: INFO: Couldn't delete PD "e2e-b3be4e167f-abe28-1be3bf14-157e-4a43-9dbd-739823f8c767", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/disks/e2e-b3be4e167f-abe28-1be3bf14-157e-4a43-9dbd-739823f8c767' is already being used by 'projects/k8s-jkns-e2e-gce-serial-1-2/zones/us-west1-b/instances/e2e-b3be4e167f-abe28-minion-group-69t5', resourceInUseByAnotherResource
Oct 19 01:55:27.870: INFO: Successfully deleted PD "e2e-b3be4e167f-abe28-1be3bf14-157e-4a43-9dbd-739823f8c767".
Oct 19 01:55:27.870: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:151
Oct 19 01:55:27.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1685" for this suite.
... skipping 628 lines ...
STEP: Creating the service on top of the pods in kubernetes
Oct 19 01:54:35.632: INFO: Service node-port-service in namespace nettest-4452 found.
Oct 19 01:54:35.765: INFO: Service session-affinity-service in namespace nettest-4452 found.
STEP: dialing(http) 34.83.81.62 (node) --> 10.0.197.63:80 (config.clusterIP)
Oct 19 01:54:35.836: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.197.63:80/hostName | grep -v '^\s*$'] Namespace:nettest-4452 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:35.837: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:37.325: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.197.63:80/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 19 01:54:37.325: INFO: Waiting for [netserver-0 netserver-1 netserver-2] endpoints (expected=[netserver-0 netserver-1 netserver-2], actual=[])
Oct 19 01:54:39.370: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.197.63:80/hostName | grep -v '^\s*$'] Namespace:nettest-4452 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:39.371: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:41.331: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.197.63:80/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 19 01:54:41.331: INFO: Waiting for [netserver-0 netserver-1 netserver-2] endpoints (expected=[netserver-0 netserver-1 netserver-2], actual=[])
Oct 19 01:54:43.393: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.197.63:80/hostName | grep -v '^\s*$'] Namespace:nettest-4452 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:43.394: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:45.121: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.197.63:80/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 19 01:54:45.121: INFO: Waiting for [netserver-0 netserver-1 netserver-2] endpoints (expected=[netserver-0 netserver-1 netserver-2], actual=[])
Oct 19 01:54:47.179: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.197.63:80/hostName | grep -v '^\s*$'] Namespace:nettest-4452 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:47.179: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 19 01:54:48.140: INFO: Waiting for [netserver-1 netserver-2] endpoints (expected=[netserver-0 netserver-1 netserver-2], actual=[netserver-0])
Oct 19 01:54:50.187: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.0.197.63:80/hostName | grep -v '^\s*$'] Namespace:nettest-4452 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 19 01:54:50.187: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 325 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 605 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 353 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not gce)

      test/e2e/storage/drivers/in_tree.go:1019
------------------------------
... skipping 414 lines ...
  test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 19 01:56:01.337: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in topology-425
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  test/e2e/storage/testsuites/topology.go:192
Oct 19 01:56:01.728: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:us-west1-b]
Oct 19 01:56:01.856: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Oct 19 01:56:04.376: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Oct 19 01:56:05.294: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
... skipping 9 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 925 lines ...
      test/e2e/storage/testsuites/subpath.go:205

      Driver local doesn't support InlineVolume -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:163","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2019-10-19T01:56:23Z"}
Traceback (most recent call last):
  File "../test-infra/scenarios/kubernetes_e2e.py", line 778, in <module>
    main(parse_args())
  File "../test-infra/scenarios/kubernetes_e2e.py", line 626, in main
    mode.start(runner_args)
  File "../test-infra/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 68 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 313 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:97
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:151
------------------------------
... skipping 63 lines ...
Oct 19 01:56:23.577: INFO: Got stdout from 34.83.89.123:22: Hello from prow@e2e-b3be4e167f-abe28-minion-group-92xt
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Oct 19 01:56:24.459: INFO: Got stdout from 34.83.81.62:22: stdout
Oct 19 01:56:24.459: INFO: Got stderr from 34.83.81.62:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing prow@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [k8s.io] [sig-node] SSH
  test/e2e/framework/framework.go:151
Oct 19 01:56:29.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-5326" for this suite.


... skipping 453 lines ...
Oct 19 01:56:28.150: INFO: Waiting for PV local-pvqzs6f to bind to PVC pvc-dmpqj
Oct 19 01:56:28.150: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-dmpqj] to have phase Bound
Oct 19 01:56:28.202: INFO: PersistentVolumeClaim pvc-dmpqj found but phase is Pending instead of Bound.
Oct 19 01:56:30.247: INFO: PersistentVolumeClaim pvc-dmpqj found and phase=Bound (2.096854411s)
Oct 19 01:56:30.247: INFO: Waiting up to 3m0s for PersistentVolume local-pvqzs6f to have phase Bound
Oct 19 01:56:30.284: INFO: PersistentVolume local-pvqzs6f found and phase=Bound (36.751119ms)
[It] should fail scheduling due to different NodeSelector
  test/e2e/storage/persistent_volumes-local.go:363
STEP: local-volume-type: dir
STEP: Initializing test volumes
Oct 19 01:56:30.360: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.29.49 --kubeconfig=/workspace/.kube/config exec --namespace=persistent-local-volumes-test-6534 hostexec-e2e-b3be4e167f-abe28-minion-group-4hd0-hlbst -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6aa4232b-2e60-4771-b608-9399fd3c1bf5'
Oct 19 01:56:31.291: INFO: stderr: ""
Oct 19 01:56:31.291: INFO: stdout: ""
... skipping 24 lines ...

• [SLOW TEST:20.772 seconds]
[sig-storage] PersistentVolumes-local 
test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  test/e2e/storage/persistent_volumes-local.go:337
    should fail scheduling due to different NodeSelector
    test/e2e/storage/persistent_volumes-local.go:363
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:98
Oct 19 01:56:32.587: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
... skipping 251 lines ...