PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-09-19 11:00
Elapsed: 28m35s
Revision: dcad3f66af56c4db702605c492c0fcedd64d5e37
Refs: 82703

No Test Failures!


Error lines from build-log.txt

... skipping 139 lines ...
INFO: 5212 processes: 5133 remote cache hit, 29 processwrapper-sandbox, 50 remote.
INFO: Build completed successfully, 5305 total actions
INFO: Build completed successfully, 5305 total actions
make: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
2019/09/19 11:06:26 process.go:155: Step 'make -C /home/prow/go/src/k8s.io/kubernetes bazel-release' finished in 6m3.51543758s
2019/09/19 11:06:26 util.go:255: Flushing memory.
2019/09/19 11:06:27 util.go:265: flushMem error (page cache): exit status 1
2019/09/19 11:06:27 process.go:153: Running: /home/prow/go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce --allow-dup
push-build.sh: BEGIN main on 4c748fb0-dacc-11e9-876c-4a2f31b5c522 Thu Sep 19 11:06:27 UTC 2019

$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
INFO: Invocation ID: cdbcca4b-60c5-4dcc-8fd3-7954b51e8ee4
Loading: 
... skipping 848 lines ...
Trying to find master named 'e2e-6b50171459-abe28-master'
Looking for address 'e2e-6b50171459-abe28-master-ip'
Using master: e2e-6b50171459-abe28-master (external IP: 35.199.155.26; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

...........Kubernetes cluster created.
Cluster "k8s-gce-serial-1-5_e2e-6b50171459-abe28" set.
User "k8s-gce-serial-1-5_e2e-6b50171459-abe28" set.
Context "k8s-gce-serial-1-5_e2e-6b50171459-abe28" created.
Switched to context "k8s-gce-serial-1-5_e2e-6b50171459-abe28".
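Those four confirmation lines ("set.", "set.", "created.", "Switched to context") are the standard output of kubectl config wiring the new cluster into a kubeconfig, roughly as follows (server endpoint taken from the log; credential flags elided, so this is a sketch rather than the exact kube-up invocation):

kubectl config set-cluster k8s-gce-serial-1-5_e2e-6b50171459-abe28 --server=https://35.199.155.26
kubectl config set-credentials k8s-gce-serial-1-5_e2e-6b50171459-abe28
kubectl config set-context k8s-gce-serial-1-5_e2e-6b50171459-abe28 \
  --cluster=k8s-gce-serial-1-5_e2e-6b50171459-abe28 \
  --user=k8s-gce-serial-1-5_e2e-6b50171459-abe28
kubectl config use-context k8s-gce-serial-1-5_e2e-6b50171459-abe28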
... skipping 1931 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 19 11:19:39.183: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e84f31c6-f6f6-456c-bc43-69cddbf6f9fc"
Sep 19 11:19:39.183: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e84f31c6-f6f6-456c-bc43-69cddbf6f9fc" in namespace "pods-9201" to be "terminated due to deadline exceeded"
Sep 19 11:19:39.221: INFO: Pod "pod-update-activedeadlineseconds-e84f31c6-f6f6-456c-bc43-69cddbf6f9fc": Phase="Running", Reason="", readiness=true. Elapsed: 38.866923ms
Sep 19 11:19:41.262: INFO: Pod "pod-update-activedeadlineseconds-e84f31c6-f6f6-456c-bc43-69cddbf6f9fc": Phase="Running", Reason="", readiness=true. Elapsed: 2.078958449s
Sep 19 11:19:43.302: INFO: Pod "pod-update-activedeadlineseconds-e84f31c6-f6f6-456c-bc43-69cddbf6f9fc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.119272442s
Sep 19 11:19:43.302: INFO: Pod "pod-update-activedeadlineseconds-e84f31c6-f6f6-456c-bc43-69cddbf6f9fc" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:152
Sep 19 11:19:43.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9201" for this suite.
Sep 19 11:19:49.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
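The Running -> Failed flip above is activeDeadlineSeconds at work: once the deadline elapses, the kubelet fails the pod with reason DeadlineExceeded. A minimal sketch of reproducing that by hand (hypothetical pod name; on a live pod the field may only be added or shortened):

# Start a long-running pod, then impose a 5s deadline on it.
kubectl run deadline-demo --image=busybox --restart=Never -- sleep 3600
kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
# After ~5s this should print: Failed DeadlineExceeded
kubectl get pod deadline-demo -o jsonpath='{.status.phase} {.status.reason}'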
... skipping 2264 lines ...
Sep 19 11:20:30.655: INFO: Pod exec-volume-test-gcepd-preprovisionedpv-fv86 no longer exists
STEP: Deleting pod exec-volume-test-gcepd-preprovisionedpv-fv86
Sep 19 11:20:30.655: INFO: Deleting pod "exec-volume-test-gcepd-preprovisionedpv-fv86" in namespace "volume-6865"
STEP: Deleting pv and pvc
Sep 19 11:20:30.761: INFO: Deleting PersistentVolumeClaim "pvc-r5f7t"
Sep 19 11:20:30.836: INFO: Deleting PersistentVolume "gcepd-d8zs9"
Sep 19 11:20:32.135: INFO: error deleting PD "e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5": googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-10wg', resourceInUseByAnotherResource
Sep 19 11:20:32.135: INFO: Couldn't delete PD "e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-10wg', resourceInUseByAnotherResource
Sep 19 11:20:38.537: INFO: error deleting PD "e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5": googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-10wg', resourceInUseByAnotherResource
Sep 19 11:20:38.537: INFO: Couldn't delete PD "e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-10wg', resourceInUseByAnotherResource
Sep 19 11:20:44.750: INFO: error deleting PD "e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5": googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-10wg', resourceInUseByAnotherResource
Sep 19 11:20:44.750: INFO: Couldn't delete PD "e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-10wg', resourceInUseByAnotherResource
Sep 19 11:20:52.578: INFO: Successfully deleted PD "e2e-6b50171459-abe28-45074561-0da2-4e62-a70d-d61b738993c5".
Sep 19 11:20:52.578: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:152
Sep 19 11:20:52.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6865" for this suite.
... skipping 1987 lines ...
Sep 19 11:21:04.102: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 19 11:21:04.102: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.199.155.26 --kubeconfig=/workspace/.kube/config describe pod redis-master-qsv2x --namespace=kubectl-4243'
Sep 19 11:21:04.593: INFO: stderr: ""
Sep 19 11:21:04.593: INFO: stdout: "Name:         redis-master-qsv2x\nNamespace:    kubectl-4243\nPriority:     0\nNode:         e2e-6b50171459-abe28-minion-group-10wg/10.40.0.5\nStart Time:   Thu, 19 Sep 2019 11:20:38 +0000\nLabels:       app=redis\n              role=master\nAnnotations:  kubernetes.io/psp: e2e-test-privileged-psp\nStatus:       Running\nIP:           10.64.2.36\nIPs:\n  IP:           10.64.2.36\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://4328cd538d203ce043b545d4a3d028658b4a63e7a0a8393540688a999e45fca4\n    Image:          docker.io/library/redis:5.0.5-alpine\n    Image ID:       docker-pullable://redis@sha256:a606eaca41c3c69c7d2c8a142ec445e71156bae8526ae7970f62b6399e57761c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 19 Sep 2019 11:21:01 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vz5s4 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-vz5s4:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-vz5s4\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                                             Message\n  ----    ------     ----       ----                                             -------\n  Normal  Scheduled  <unknown>  default-scheduler                                Successfully assigned kubectl-4243/redis-master-qsv2x to e2e-6b50171459-abe28-minion-group-10wg\n  Normal  Pulling    17s        kubelet, e2e-6b50171459-abe28-minion-group-10wg  Pulling image \"docker.io/library/redis:5.0.5-alpine\"\n  Normal  Pulled     4s         kubelet, e2e-6b50171459-abe28-minion-group-10wg  Successfully pulled image \"docker.io/library/redis:5.0.5-alpine\"\n  Normal  Created    4s         kubelet, e2e-6b50171459-abe28-minion-group-10wg  Created container redis-master\n  Normal  Started    3s         kubelet, e2e-6b50171459-abe28-minion-group-10wg  Started container redis-master\n"
Sep 19 11:21:04.594: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.199.155.26 --kubeconfig=/workspace/.kube/config describe rc redis-master --namespace=kubectl-4243'
Sep 19 11:21:05.420: INFO: stderr: ""
Sep 19 11:21:05.420: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-4243\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        docker.io/library/redis:5.0.5-alpine\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  27s   replication-controller  Created pod: redis-master-qsv2x\n"
Sep 19 11:21:05.420: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.199.155.26 --kubeconfig=/workspace/.kube/config describe service redis-master --namespace=kubectl-4243'
Sep 19 11:21:06.230: INFO: stderr: ""
Sep 19 11:21:06.230: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-4243\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.0.124.79\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.64.2.36:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep 19 11:21:06.279: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.199.155.26 --kubeconfig=/workspace/.kube/config describe node e2e-6b50171459-abe28-master'
Sep 19 11:21:07.120: INFO: stderr: ""
Sep 19 11:21:07.120: INFO: stdout: "Name:               e2e-6b50171459-abe28-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-west1\n                    failure-domain.beta.kubernetes.io/zone=us-west1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=e2e-6b50171459-abe28-master\n                    kubernetes.io/os=linux\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 19 Sep 2019 11:17:57 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Thu, 19 Sep 2019 11:18:13 +0000   Thu, 19 Sep 2019 11:18:13 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Thu, 19 Sep 2019 11:20:48 +0000   Thu, 19 Sep 2019 11:17:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 19 Sep 2019 11:20:48 +0000   Thu, 19 Sep 2019 11:17:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 19 Sep 2019 11:20:48 +0000   Thu, 19 Sep 2019 11:17:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 19 Sep 2019 11:20:48 +0000   Thu, 19 Sep 2019 11:17:58 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:   10.40.0.2\n  ExternalIP:   35.199.155.26\n  InternalDNS:  e2e-6b50171459-abe28-master.c.k8s-gce-serial-1-5.internal\n  Hostname:     e2e-6b50171459-abe28-master.c.k8s-gce-serial-1-5.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          16293736Ki\n  hugepages-2Mi:              0\n  memory:                     3787520Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          15016307073\n  hugepages-2Mi:              0\n  memory:                     3531520Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 ab35ee211f2a10f01def5bf5db1c44f7\n  System UUID:                AB35EE21-1F2A-10F0-1DEF-5BF5DB1C44F7\n  Boot ID:                    9ff81b66-0c52-43af-a2db-fa1a3fdf86e3\n  Kernel Version:             4.14.94+\n  OS Image:                   Container-Optimized OS from Google\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.3\n  Kubelet Version:            v1.17.0-alpha.0.1563+f9a484c52c5902\n  Kube-Proxy Version:         v1.17.0-alpha.0.1563+f9a484c52c5902\nPodCIDR:                      10.64.1.0/24\nPodCIDRs:                     10.64.1.0/24\nProviderID:                   gce://k8s-gce-serial-1-5/us-west1-b/e2e-6b50171459-abe28-master\nNon-terminated Pods:          (10 in total)\n  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-empty-dir-cleanup-e2e-6b50171459-abe28-master     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s\n  kube-system                 etcd-server-e2e-6b50171459-abe28-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         2m24s\n  kube-system                 etcd-server-events-e2e-6b50171459-abe28-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s\n  kube-system                 fluentd-gcp-v3.2.0-kpld9                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s\n  kube-system                 kube-addon-manager-e2e-6b50171459-abe28-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         2m20s\n  kube-system                 kube-apiserver-e2e-6b50171459-abe28-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         2m28s\n  kube-system                 kube-controller-manager-e2e-6b50171459-abe28-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         2m43s\n  kube-system                 kube-scheduler-e2e-6b50171459-abe28-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         2m44s\n  kube-system                 l7-lb-controller-v1.2.3-e2e-6b50171459-abe28-master    10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         2m6s\n  kube-system                 metadata-proxy-v0.1-55x6q                              32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      3m9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests    Limits\n  --------                   --------    ------\n  cpu                        872m (87%)  32m (3%)\n  memory              
       145Mi (4%)  45Mi (1%)\n  ephemeral-storage          0 (0%)      0 (0%)\n  attachable-volumes-gce-pd  0           0\nEvents:                      <none>\n"
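Note the Taints section in the describe output: the master carries node-role.kubernetes.io/master:NoSchedule and node.kubernetes.io/unschedulable:NoSchedule, the latter being one of the condition-derived taints that this PR's TaintNodesByCondition graduation concerns. A quick way to list taints across all nodes with plain kubectl, as used elsewhere in this log:

# Print each node with its taints; unschedulable nodes also show
# node.kubernetes.io/unschedulable:NoSchedule.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'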
... skipping 1030 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:152
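The test above sets kernel.shm_rmid_forced through the pod securityContext and verifies it from inside the container. A minimal sketch of the same shape (hypothetical pod name; sysctls outside the safe set additionally require the kubelet's --allowed-unsafe-sysctls flag, which is what this "actually whitelisted" case exercises):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo            # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"]
EOF
kubectl logs sysctl-demo       # expect: kernel.shm_rmid_forced = 1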
... skipping 504 lines ...
Sep 19 11:21:50.987: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.199.155.26 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-2557-crds.spec'
Sep 19 11:21:51.743: INFO: stderr: ""
Sep 19 11:21:51.743: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2557-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Sep 19 11:21:51.744: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.199.155.26 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-2557-crds.spec.bars'
Sep 19 11:21:52.354: INFO: stderr: ""
Sep 19 11:21:52.354: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2557-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Sep 19 11:21:52.355: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.199.155.26 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-2557-crds.spec.bars2'
Sep 19 11:21:52.998: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:152
Sep 19 11:21:59.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6179" for this suite.
... skipping 414 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 19 11:21:37.362: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-9452
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:152
Sep 19 11:22:05.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep 19 11:22:15.974: INFO: namespace job-9452 deletion completed in 9.93015276s


• [SLOW TEST:38.612 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
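"Locally restarted" here means restartPolicy: OnFailure: a container that exits non-zero is restarted in place by the kubelet rather than through a replacement pod, and the Job still reaches its completions. A hedged sketch of a Job with that behavior (hypothetical names; the emptyDir survives container restarts, so the first attempt fails and the local retry succeeds):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: sometimes-fails        # hypothetical name
spec:
  completions: 1
  template:
    spec:
      restartPolicy: OnFailure # failed containers restart locally
      containers:
      - name: main
        image: busybox
        command: ["/bin/sh", "-c", "if [ -f /data/ok ]; then exit 0; else touch /data/ok; exit 1; fi"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
EOF
kubectl wait --for=condition=complete job/sometimes-fails --timeout=2m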
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:93
... skipping 5029 lines ...
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-6255
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:698
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 19 11:23:22.796: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
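FallbackToLogsOnError tells the kubelet to use the tail of the container log as the termination message when the container fails without writing to its termination message file, which is why the test above expects "DONE". A hedged reproduction (hypothetical names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the container has failed, the log tail shows up as the message:
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'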
... skipping 3142 lines ...
Sep 19 11:24:22.041: INFO: Pod exec-volume-test-gcepd-preprovisionedpv-d9rz no longer exists
STEP: Deleting pod exec-volume-test-gcepd-preprovisionedpv-d9rz
Sep 19 11:24:22.041: INFO: Deleting pod "exec-volume-test-gcepd-preprovisionedpv-d9rz" in namespace "volume-7640"
STEP: Deleting pv and pvc
Sep 19 11:24:22.093: INFO: Deleting PersistentVolumeClaim "pvc-brvjl"
Sep 19 11:24:22.149: INFO: Deleting PersistentVolume "gcepd-4z527"
Sep 19 11:24:23.700: INFO: error deleting PD "e2e-6b50171459-abe28-5e3a3384-112a-4a71-abe2-4c2681c7921c": googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-5e3a3384-112a-4a71-abe2-4c2681c7921c' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-n234', resourceInUseByAnotherResource
Sep 19 11:24:23.700: INFO: Couldn't delete PD "e2e-6b50171459-abe28-5e3a3384-112a-4a71-abe2-4c2681c7921c", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-5e3a3384-112a-4a71-abe2-4c2681c7921c' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-n234', resourceInUseByAnotherResource
Sep 19 11:24:30.546: INFO: error deleting PD "e2e-6b50171459-abe28-5e3a3384-112a-4a71-abe2-4c2681c7921c": googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-5e3a3384-112a-4a71-abe2-4c2681c7921c' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-n234', resourceInUseByAnotherResource
Sep 19 11:24:30.546: INFO: Couldn't delete PD "e2e-6b50171459-abe28-5e3a3384-112a-4a71-abe2-4c2681c7921c", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-5e3a3384-112a-4a71-abe2-4c2681c7921c' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-n234', resourceInUseByAnotherResource
Sep 19 11:24:38.046: INFO: Successfully deleted PD "e2e-6b50171459-abe28-5e3a3384-112a-4a71-abe2-4c2681c7921c".
Sep 19 11:24:38.046: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:152
Sep 19 11:24:38.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7640" for this suite.
... skipping 266 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5939
STEP: Creating statefulset with conflicting port in namespace statefulset-5939
STEP: Waiting until pod test-pod will start running in namespace statefulset-5939
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5939
Sep 19 11:24:11.796: INFO: Observed stateful pod in namespace: statefulset-5939, name: ss-0, uid: 4cc9e394-fd8b-4569-9d4a-ae039fdc0b57, status phase: Pending. Waiting for statefulset controller to delete.
Sep 19 11:24:11.881: INFO: Observed stateful pod in namespace: statefulset-5939, name: ss-0, uid: 4cc9e394-fd8b-4569-9d4a-ae039fdc0b57, status phase: Failed. Waiting for statefulset controller to delete.
Sep 19 11:24:11.963: INFO: Observed stateful pod in namespace: statefulset-5939, name: ss-0, uid: 4cc9e394-fd8b-4569-9d4a-ae039fdc0b57, status phase: Failed. Waiting for statefulset controller to delete.
Sep 19 11:24:12.002: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5939
STEP: Removing pod with conflicting port in namespace statefulset-5939
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5939 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:89
Sep 19 11:24:28.633: INFO: Deleting all statefulset in ns statefulset-5939
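The Pending -> Failed -> deleted sequence above is the StatefulSet controller's recreate loop: ss-0 keeps landing on the node holding the conflicting hostPort, fails, and is recreated until the conflicting pod goes away. Watching the same loop by hand (names taken from the log):

# ss-0 cycles Pending -> Failed -> (deleted) -> recreated while the port is taken:
kubectl get pods -n statefulset-5939 -w
# Removing the conflicting pod lets the next ss-0 replacement reach Running:
kubectl delete pod test-pod -n statefulset-5939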
... skipping 722 lines ...
Sep 19 11:24:02.594: INFO: ssh prow@35.227.191.142:22: command:   sudo mkdir "/var/lib/kubelet/mount-propagation-1020"/host; sudo mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-1020"/host; echo host > "/var/lib/kubelet/mount-propagation-1020"/host/file
Sep 19 11:24:02.595: INFO: ssh prow@35.227.191.142:22: stdout:    ""
Sep 19 11:24:02.595: INFO: ssh prow@35.227.191.142:22: stderr:    ""
Sep 19 11:24:02.595: INFO: ssh prow@35.227.191.142:22: exit code: 0
Sep 19 11:24:02.634: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1020 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:02.634: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:03.274: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Sep 19 11:24:03.313: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1020 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:03.313: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:04.385: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:04.425: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1020 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:04.425: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:05.657: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:05.702: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1020 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:05.702: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:06.602: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:06.650: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1020 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:06.692: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:07.587: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Sep 19 11:24:07.632: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1020 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:07.632: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:08.520: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Sep 19 11:24:08.560: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1020 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:08.560: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:09.166: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Sep 19 11:24:09.206: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1020 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:09.206: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:09.780: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:09.834: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1020 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:09.834: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:10.651: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:10.697: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1020 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:10.697: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:11.693: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Sep 19 11:24:11.768: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1020 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:11.768: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:12.679: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:12.733: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1020 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:12.733: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:13.558: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:13.612: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1020 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:13.612: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:14.089: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Sep 19 11:24:14.134: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1020 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:14.134: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:16.243: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:16.292: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1020 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:16.292: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:18.136: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:18.186: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1020 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:18.187: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:19.315: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:19.353: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1020 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:19.353: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:20.351: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:20.392: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1020 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:20.392: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:21.272: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:21.312: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1020 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:21.313: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:22.334: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Sep 19 11:24:22.376: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1020 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 19 11:24:22.376: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 19 11:24:23.491: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Sep 19 11:24:23.497: INFO: Getting external IP address for e2e-6b50171459-abe28-minion-group-10wg
Sep 19 11:24:23.497: INFO: SSH "test `cat \"/var/lib/kubelet/mount-propagation-1020\"/master/file` = master" on e2e-6b50171459-abe28-minion-group-10wg(35.227.191.142:22)
Sep 19 11:24:23.952: INFO: ssh prow@35.227.191.142:22: command:   test `cat "/var/lib/kubelet/mount-propagation-1020"/master/file` = master
Sep 19 11:24:23.952: INFO: ssh prow@35.227.191.142:22: stdout:    ""
Sep 19 11:24:23.952: INFO: ssh prow@35.227.191.142:22: stderr:    ""
Sep 19 11:24:23.953: INFO: ssh prow@35.227.191.142:22: exit code: 0
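Read as a matrix, the exec results above are the mount-propagation semantics in full: the "master" pod (mountPropagation: Bidirectional, which requires privileged mode) both sees host mounts and propagates its own out; the "slave" pod (HostToContainer) sees host and master mounts but its own mounts do not propagate; "private" and "default" (None) see neither direction. A minimal sketch of the slave side (hypothetical names; HostToContainer needs no privilege):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: propagation-demo       # hypothetical name
spec:
  containers:
  - name: cntr
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-dir
      mountPath: /mnt/test
      mountPropagation: HostToContainer   # the "slave" mode in the matrix
  volumes:
  - name: host-dir
    hostPath:
      path: /var/lib/kubelet/propagation-demo
EOF
# Mounts created on the host under that path now appear inside the pod:
kubectl exec propagation-demo -- ls /mnt/test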
... skipping 2335 lines ...
Sep 19 11:25:12.151: INFO: Unable to read jessie_udp@dns-test-service.dns-8893 from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:12.244: INFO: Unable to read jessie_tcp@dns-test-service.dns-8893 from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:12.290: INFO: Unable to read jessie_udp@dns-test-service.dns-8893.svc from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:12.346: INFO: Unable to read jessie_tcp@dns-test-service.dns-8893.svc from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:12.391: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8893.svc from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:12.444: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8893.svc from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:12.766: INFO: Lookups using dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8893 wheezy_tcp@dns-test-service.dns-8893 wheezy_udp@dns-test-service.dns-8893.svc wheezy_tcp@dns-test-service.dns-8893.svc wheezy_udp@_http._tcp.dns-test-service.dns-8893.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8893.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8893 jessie_tcp@dns-test-service.dns-8893 jessie_udp@dns-test-service.dns-8893.svc jessie_tcp@dns-test-service.dns-8893.svc jessie_udp@_http._tcp.dns-test-service.dns-8893.svc jessie_tcp@_http._tcp.dns-test-service.dns-8893.svc]

Sep 19 11:25:17.849: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:17.917: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:18.020: INFO: Unable to read wheezy_udp@dns-test-service.dns-8893 from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:18.132: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8893 from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:18.247: INFO: Unable to read wheezy_udp@dns-test-service.dns-8893.svc from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
... skipping 5 lines ...
Sep 19 11:25:19.267: INFO: Unable to read jessie_udp@dns-test-service.dns-8893 from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:19.309: INFO: Unable to read jessie_tcp@dns-test-service.dns-8893 from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:19.381: INFO: Unable to read jessie_udp@dns-test-service.dns-8893.svc from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:19.428: INFO: Unable to read jessie_tcp@dns-test-service.dns-8893.svc from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:19.472: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8893.svc from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:19.533: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8893.svc from pod dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a: the server could not find the requested resource (get pods dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a)
Sep 19 11:25:19.849: INFO: Lookups using dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8893 wheezy_tcp@dns-test-service.dns-8893 wheezy_udp@dns-test-service.dns-8893.svc wheezy_tcp@dns-test-service.dns-8893.svc wheezy_udp@_http._tcp.dns-test-service.dns-8893.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8893.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8893 jessie_tcp@dns-test-service.dns-8893 jessie_udp@dns-test-service.dns-8893.svc jessie_tcp@dns-test-service.dns-8893.svc jessie_udp@_http._tcp.dns-test-service.dns-8893.svc jessie_tcp@_http._tcp.dns-test-service.dns-8893.svc]

Sep 19 11:25:25.396: INFO: DNS probes using dns-8893/dns-test-f47c34e6-1ddc-462a-945e-2a438c3b9a3a succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
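The probes resolve the service at every qualification level (dns-test-service, then .dns-8893, then .dns-8893.svc, plus the _http._tcp SRV name), from both a wheezy and a jessie image and over UDP and TCP; the failures above are the probes racing endpoint setup, and they clear once the retry at 11:25:25 succeeds. Roughly the equivalent manual checks (hypothetical client pod; assumes an image with a full nslookup, e.g. dnsutils):

kubectl exec -n dns-8893 dns-client -- nslookup dns-test-service
kubectl exec -n dns-8893 dns-client -- nslookup dns-test-service.dns-8893.svc.cluster.local
kubectl exec -n dns-8893 dns-client -- nslookup -type=SRV _http._tcp.dns-test-service.dns-8893.svc.cluster.local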
... skipping 2163 lines ...
Sep 19 11:25:13.718: INFO: PersistentVolumeClaim csi-hostpathppgh4 found but phase is Pending instead of Bound.
Sep 19 11:25:15.774: INFO: PersistentVolumeClaim csi-hostpathppgh4 found but phase is Pending instead of Bound.
Sep 19 11:25:17.841: INFO: PersistentVolumeClaim csi-hostpathppgh4 found but phase is Pending instead of Bound.
Sep 19 11:25:19.884: INFO: PersistentVolumeClaim csi-hostpathppgh4 found but phase is Pending instead of Bound.
Sep 19 11:25:21.929: INFO: PersistentVolumeClaim csi-hostpathppgh4 found but phase is Pending instead of Bound.
Sep 19 11:25:24.094: INFO: PersistentVolumeClaim csi-hostpathppgh4 found but phase is Pending instead of Bound.
Sep 19 11:25:26.095: FAIL: Unexpected error:
    <*errors.errorString | 0xc002c53cb0>: {
        s: "PersistentVolumeClaims [csi-hostpathppgh4] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [csi-hostpathppgh4] not all in phase Bound within 5m0s
occurred
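This is the failure that marks the job red: the claim never left Pending because, as the events below show, csi-hostpathplugin-0 could not be scheduled (Predicate PodFitsHostPorts failed repeatedly), so no hostpath CSI driver was running to provision the volume within the 5m0s bind timeout. Confirming a stuck claim like this by hand:

# The claim's events name the provisioner it is waiting for:
kubectl describe pvc csi-hostpathppgh4 -n provisioning-861
# The plugin pod's events show the scheduling failure:
kubectl describe pod csi-hostpathplugin-0 -n provisioning-861
# Find which pods already hold host ports on the node (the source of the conflict):
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.spec.nodeName}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}'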
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "provisioning-861".
STEP: Found 141 events.
Sep 19 11:25:26.202: INFO: At 2019-09-19 11:20:22 +0000 UTC - event for csi-hostpath-attacher: {statefulset-controller } FailedCreate: create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
Sep 19 11:25:26.202: INFO: At 2019-09-19 11:20:23 +0000 UTC - event for csi-hostpath-provisioner: {statefulset-controller } FailedCreate: create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
Sep 19 11:25:26.202: INFO: At 2019-09-19 11:20:23 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
Sep 19 11:25:26.202: INFO: At 2019-09-19 11:20:23 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.202: INFO: At 2019-09-19 11:20:24 +0000 UTC - event for csi-hostpath-attacher: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
Sep 19 11:25:26.202: INFO: At 2019-09-19 11:20:24 +0000 UTC - event for csi-hostpath-provisioner: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
Sep 19 11:25:26.202: INFO: At 2019-09-19 11:20:24 +0000 UTC - event for csi-hostpath-resizer: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:24 +0000 UTC - event for csi-hostpath-resizer: {statefulset-controller } FailedCreate: create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:24 +0000 UTC - event for csi-hostpathppgh4: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "csi-hostpath-provisioning-861" or manually created by system administrator
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:24 +0000 UTC - event for csi-snapshotter: {statefulset-controller } FailedCreate: create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:24 +0000 UTC - event for csi-snapshotter: {statefulset-controller } SuccessfulCreate: create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:28 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } RecreatingFailedPod: StatefulSet provisioning-861/csi-hostpathplugin is recreating failed Pod csi-hostpathplugin-0
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:29 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulDelete: delete Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:29 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } FailedCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin failed error: The POST operation against Pod could not be completed at this time, please try again.
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:29 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:34 +0000 UTC - event for csi-snapshotter-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Pulling: Pulling image "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1"
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:36 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Pulled: Container image "quay.io/k8scsi/csi-attacher:v1.2.0" already present on machine
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:36 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Pulling: Pulling image "quay.io/k8scsi/csi-resizer:v0.2.0"
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:36 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:36 +0000 UTC - event for csi-snapshotter-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Pulled: Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1"
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:37 +0000 UTC - event for csi-snapshotter-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Created: Created container csi-snapshotter
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:38 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Created: Created container csi-attacher
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:38 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Pulled: Container image "quay.io/k8scsi/csi-provisioner:v1.4.0-rc1" already present on machine
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:39 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Created: Created container csi-provisioner
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:39 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Created: Created container csi-resizer
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:39 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Pulled: Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.2.0"
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:39 +0000 UTC - event for csi-snapshotter-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Started: Started container csi-snapshotter
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:40 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:42 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Started: Started container csi-attacher
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:42 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Started: Started container csi-provisioner
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:42 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} Started: Started container csi-resizer
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:42 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:43 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:47 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:20:53 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:21:03 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:21:06 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:21:09 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:21:14 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:21:17 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:21:20 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:21:23 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:21:28 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:21:32 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Sep 19 11:25:26.203: INFO: At 2019-09-19 11:21:38 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
... skipping 95 lines ...
Sep 19 11:25:26.205: INFO: At 2019-09-19 11:25:18 +0000 UTC - event for csi-hostpathplugin-0: {kubelet e2e-6b50171459-abe28-minion-group-vsm4} PodFitsHostPorts: Predicate PodFitsHostPorts failed
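The run of events above is the kubelet on e2e-6b50171459-abe28-minion-group-vsm4 repeatedly rejecting csi-hostpathplugin-0 at admission: its PodFitsHostPorts check fails because a hostPort the pod requests is already in use on the node, so the pod never starts and stays Pending (see the pod listing below). A minimal Go sketch of what that check does, not the scheduler's or kubelet's actual source; the port value is hypothetical, since the log does not say which port the plugin requested:

```go
package main

import "fmt"

// hostPort mirrors the (ip, protocol, port) triple compared when
// deciding whether a pod's hostPort request fits on a node.
type hostPort struct {
	ip       string
	protocol string
	port     int32
}

// fitsHostPorts reports whether every requested host port is free on
// the node. The real predicate also handles 0.0.0.0 wildcard overlap;
// this sketch checks exact matches only.
func fitsHostPorts(wanted []hostPort, inUse map[hostPort]bool) bool {
	for _, p := range wanted {
		if inUse[p] {
			return false
		}
	}
	return true
}

func main() {
	// Hypothetical port for illustration only.
	taken := map[hostPort]bool{{ip: "0.0.0.0", protocol: "TCP", port: 9898}: true}
	want := []hostPort{{ip: "0.0.0.0", protocol: "TCP", port: 9898}}
	// false -> predicate fails, the pod cannot be admitted and stays Pending.
	fmt.Println(fitsHostPorts(want, taken))
}
```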
Sep 19 11:25:26.251: INFO: POD                         NODE                                    PHASE    GRACE  CONDITIONS
Sep 19 11:25:26.251: INFO: csi-hostpath-attacher-0     e2e-6b50171459-abe28-minion-group-vsm4  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:43 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:43 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:24 +0000 UTC  }]
Sep 19 11:25:26.251: INFO: csi-hostpath-provisioner-0  e2e-6b50171459-abe28-minion-group-vsm4  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:25 +0000 UTC  }]
Sep 19 11:25:26.252: INFO: csi-hostpath-resizer-0      e2e-6b50171459-abe28-minion-group-vsm4  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:24 +0000 UTC  }]
Sep 19 11:25:26.252: INFO: csi-hostpathplugin-0        e2e-6b50171459-abe28-minion-group-vsm4  Pending         []
Sep 19 11:25:26.252: INFO: csi-snapshotter-0           e2e-6b50171459-abe28-minion-group-vsm4  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:43 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:43 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-19 11:20:24 +0000 UTC  }]
Sep 19 11:25:26.252: INFO: 
Sep 19 11:25:26.369: INFO: 
Logging node info for node e2e-6b50171459-abe28-master
Sep 19 11:25:26.444: INFO: Node Info: &Node{ObjectMeta:{e2e-6b50171459-abe28-master   /api/v1/nodes/e2e-6b50171459-abe28-master 2792ef5d-9533-46aa-88c3-53b3af2f6ec5 10082 0 2019-09-19 11:17:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-6b50171459-abe28-master kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-gce-serial-1-5/us-west1-b/e2e-6b50171459-abe28-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3878420480 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3616276480 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-09-19 11:18:13 +0000 UTC,LastTransitionTime:2019-09-19 11:18:13 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-19 11:24:48 +0000 UTC,LastTransitionTime:2019-09-19 11:17:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-19 11:24:48 +0000 UTC,LastTransitionTime:2019-09-19 11:17:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-19 11:24:48 +0000 UTC,LastTransitionTime:2019-09-19 11:17:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-19 11:24:48 +0000 UTC,LastTransitionTime:2019-09-19 11:17:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:35.199.155.26,},NodeAddress{Type:InternalDNS,Address:e2e-6b50171459-abe28-master.c.k8s-gce-serial-1-5.internal,},NodeAddress{Type:Hostname,Address:e2e-6b50171459-abe28-master.c.k8s-gce-serial-1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ab35ee211f2a10f01def5bf5db1c44f7,SystemUUID:AB35EE21-1F2A-10F0-1DEF-5BF5DB1C44F7,BootID:9ff81b66-0c52-43af-a2db-fa1a3fdf86e3,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.17.0-alpha.0.1563+f9a484c52c5902,KubeProxyVersion:v1.17.0-alpha.0.1563+f9a484c52c5902,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.17.0-alpha.0.1563_f9a484c52c5902],SizeBytes:288817210,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.17.0-alpha.0.1563_f9a484c52c5902],SizeBytes:183836266,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.17.0-alpha.0.1563_f9a484c52c5902],SizeBytes:94195315,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:c60552d15b383b88a75d552816041cfe963edc3e4d9e8e378d7c0aae20cfb3f7 k8s.gcr.io/etcd-empty-dir-cleanup:3.3.15.0],SizeBytes:78265220,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:14f14351a03038b238232e60850a9cfa0dffbed0590321ef84216a432accc1ca k8s.gcr.io/ingress-gce-glbc-amd64:v1.2.3],SizeBytes:71797285,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Sep 19 11:25:26.444: INFO: 
Logging kubelet events for node e2e-6b50171459-abe28-master
Sep 19 11:25:26.517: INFO: 
Logging pods the kubelet thinks are on node e2e-6b50171459-abe28-master
Sep 19 11:25:26.844: INFO: etcd-server-events-e2e-6b50171459-abe28-master started at 2019-09-19 11:17:10 +0000 UTC (0+1 container statuses recorded)
Sep 19 11:25:26.844: INFO: 	Container etcd-container ready: true, restart count 0
... skipping 18 lines ...
Sep 19 11:25:26.844: INFO: 	Container metadata-proxy ready: true, restart count 0
Sep 19 11:25:26.844: INFO: 	Container prometheus-to-sd-exporter ready: true, restart count 0
Sep 19 11:25:27.122: INFO: 
Latency metrics for node e2e-6b50171459-abe28-master
Sep 19 11:25:27.131: INFO: 
Logging node info for node e2e-6b50171459-abe28-minion-group-10wg
Sep 19 11:25:27.206: INFO: Node Info: &Node{ObjectMeta:{e2e-6b50171459-abe28-minion-group-10wg   /api/v1/nodes/e2e-6b50171459-abe28-minion-group-10wg e096cc85-a7c2-4b2c-81da-a48a9ecb3be1 11058 0 2019-09-19 11:17:58 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-6b50171459-abe28-minion-group-10wg kubernetes.io/os:linux mounted_volume_expand:mounted-volume-expand-5922] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-3996":"e2e-6b50171459-abe28-minion-group-10wg","csi-hostpath-volume-expand-3059":"e2e-6b50171459-abe28-minion-group-10wg"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-gce-serial-1-5/us-west1-b/e2e-6b50171459-abe28-minion-group-10wg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7841865728 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7579721728 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-09-19 11:18:27 +0000 UTC,LastTransitionTime:2019-09-19 11:18:27 +0000 
UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-19 11:25:23 +0000 UTC,LastTransitionTime:2019-09-19 11:17:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-19 11:25:23 +0000 UTC,LastTransitionTime:2019-09-19 11:17:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-19 11:25:23 +0000 UTC,LastTransitionTime:2019-09-19 11:17:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-19 11:25:23 +0000 UTC,LastTransitionTime:2019-09-19 11:17:59 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:35.227.191.142,},NodeAddress{Type:InternalDNS,Address:e2e-6b50171459-abe28-minion-group-10wg.c.k8s-gce-serial-1-5.internal,},NodeAddress{Type:Hostname,Address:e2e-6b50171459-abe28-minion-group-10wg.c.k8s-gce-serial-1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ac2a5d2f58dea82e305b2be25aa6ecf3,SystemUUID:AC2A5D2F-58DE-A82E-305B-2BE25AA6ECF3,BootID:558e20b9-b33e-44e7-bb13-1551b0d29eaf,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.17.0-alpha.0.1563+f9a484c52c5902,KubeProxyVersion:v1.17.0-alpha.0.1563+f9a484c52c5902,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:dfb792aa3fed0694d8361ec066c592381e9bdff508292eceedd1ce28fb020f71 httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-alpha.0.1563_f9a484c52c5902],SizeBytes:91611090,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f 
quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[redis@sha256:a606eaca41c3c69c7d2c8a142ec445e71156bae8526ae7970f62b6399e57761c redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/apparmor-loader@sha256:1fdc224b826c4bc16b3cdf5c09d6e5b8c7aa77e2b2d81472a1316bd1606fa1bd gcr.io/kubernetes-e2e-test-images/apparmor-loader:1.0],SizeBytes:13090050,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/gce-pd/e2e-6b50171459-abe28-d-pvc-4a482510-b34a-4b10-8f06-ebbbfea351c8 
kubernetes.io/gce-pd/e2e-6b50171459-abe28-d-pvc-66eb133b-8b5b-4a92-84a3-e5958f714e97],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/gce-pd/e2e-6b50171459-abe28-d-pvc-4a482510-b34a-4b10-8f06-ebbbfea351c8,DevicePath:/dev/disk/by-id/google-e2e-6b50171459-abe28-d-pvc-4a482510-b34a-4b10-8f06-ebbbfea351c8,},AttachedVolume{Name:kubernetes.io/gce-pd/e2e-6b50171459-abe28-d-pvc-66eb133b-8b5b-4a92-84a3-e5958f714e97,DevicePath:/dev/disk/by-id/google-e2e-6b50171459-abe28-d-pvc-66eb133b-8b5b-4a92-84a3-e5958f714e97,},},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Sep 19 11:25:27.207: INFO: 
Logging kubelet events for node e2e-6b50171459-abe28-minion-group-10wg
Sep 19 11:25:27.263: INFO: 
Logging pods the kubelet thinks are on node e2e-6b50171459-abe28-minion-group-10wg
Sep 19 11:25:27.344: INFO: nfs-server started at 2019-09-19 11:25:07 +0000 UTC (0+1 container statuses recorded)
Sep 19 11:25:27.344: INFO: 	Container nfs-server ready: true, restart count 0
... skipping 33 lines ...
Sep 19 11:25:27.344: INFO: e2e-test-httpd-rc-q4mzt started at 2019-09-19 11:25:20 +0000 UTC (0+1 container statuses recorded)
Sep 19 11:25:27.344: INFO: 	Container e2e-test-httpd-rc ready: true, restart count 0
Sep 19 11:25:27.670: INFO: 
Latency metrics for node e2e-6b50171459-abe28-minion-group-10wg
Sep 19 11:25:27.670: INFO: 
Logging node info for node e2e-6b50171459-abe28-minion-group-n234
Sep 19 11:25:27.730: INFO: Node Info: &Node{ObjectMeta:{e2e-6b50171459-abe28-minion-group-n234   /api/v1/nodes/e2e-6b50171459-abe28-minion-group-n234 9197e5c9-a180-46ac-86ed-9f80aedbb113 10579 0 2019-09-19 11:17:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-6b50171459-abe28-minion-group-n234 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumemode-4517":"e2e-6b50171459-abe28-minion-group-n234"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-gce-serial-1-5/us-west1-b/e2e-6b50171459-abe28-minion-group-n234,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7841857536 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7579713536 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-09-19 11:18:13 +0000 UTC,LastTransitionTime:2019-09-19 11:18:13 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-19 
11:25:00 +0000 UTC,LastTransitionTime:2019-09-19 11:17:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-19 11:25:00 +0000 UTC,LastTransitionTime:2019-09-19 11:17:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-19 11:25:00 +0000 UTC,LastTransitionTime:2019-09-19 11:17:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-19 11:25:00 +0000 UTC,LastTransitionTime:2019-09-19 11:17:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:35.233.235.60,},NodeAddress{Type:InternalDNS,Address:e2e-6b50171459-abe28-minion-group-n234.c.k8s-gce-serial-1-5.internal,},NodeAddress{Type:Hostname,Address:e2e-6b50171459-abe28-minion-group-n234.c.k8s-gce-serial-1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b36c7bd8a3ca855eb0c854caca659bed,SystemUUID:B36C7BD8-A3CA-855E-B0C8-54CACA659BED,BootID:acfc7919-7db9-4879-bbf1-b4855ec5f417,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.17.0-alpha.0.1563+f9a484c52c5902,KubeProxyVersion:v1.17.0-alpha.0.1563+f9a484c52c5902,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:dfb792aa3fed0694d8361ec066c592381e9bdff508292eceedd1ce28fb020f71 httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-alpha.0.1563_f9a484c52c5902],SizeBytes:91611090,},ContainerImage{Names:[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2],SizeBytes:90498960,},ContainerImage{Names:[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 
k8s.gcr.io/heapster-amd64:v1.6.0-beta.1],SizeBytes:76016169,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/event-exporter@sha256:529ba8eb0637d9e1d2386a3b9ad10803f92b82b85d2cf6d54da27deac696967f k8s.gcr.io/event-exporter:v0.3.0],SizeBytes:51445475,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:8075ed6db9baad249d9cf2656c0ecaad8d87133baf20286b1953dfb3fb06e75d k8s.gcr.io/addon-resizer:1.8.5],SizeBytes:35110823,},ContainerImage{Names:[redis@sha256:a606eaca41c3c69c7d2c8a142ec445e71156bae8526ae7970f62b6399e57761c redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:13513083,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd 
gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/metadata-concealment@sha256:e38f92502f6f6a175fb15b5ce2d02229d465b419d4a643f5999a0f88ee170dcc gcr.io/kubernetes-e2e-test-images/metadata-concealment:1.2],SizeBytes:5124686,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/gce-pd/e2e-6b50171459-abe28-d-pvc-bfa35a9b-14fe-46b5-8466-26ca6f684696],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/gce-pd/e2e-6b50171459-abe28-d-pvc-bfa35a9b-14fe-46b5-8466-26ca6f684696,DevicePath:/dev/disk/by-id/google-e2e-6b50171459-abe28-d-pvc-bfa35a9b-14fe-46b5-8466-26ca6f684696,},},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Sep 19 11:25:27.731: INFO: 
Logging kubelet events for node e2e-6b50171459-abe28-minion-group-n234
Sep 19 11:25:27.788: INFO: 
Logging pods the kubelet thinks are on node e2e-6b50171459-abe28-minion-group-n234
Sep 19 11:25:27.845: INFO: netserver-1 started at 2019-09-19 11:23:35 +0000 UTC (0+1 container statuses recorded)
Sep 19 11:25:27.845: INFO: 	Container webserver ready: true, restart count 0
... skipping 39 lines ...
Sep 19 11:25:27.845: INFO: ss-1 started at 2019-09-19 11:24:55 +0000 UTC (0+1 container statuses recorded)
Sep 19 11:25:27.845: INFO: 	Container webserver ready: true, restart count 0
Sep 19 11:25:28.093: INFO: 
Latency metrics for node e2e-6b50171459-abe28-minion-group-n234
Sep 19 11:25:28.106: INFO: 
Logging node info for node e2e-6b50171459-abe28-minion-group-vsm4
Sep 19 11:25:28.214: INFO: Node Info: &Node{ObjectMeta:{e2e-6b50171459-abe28-minion-group-vsm4   /api/v1/nodes/e2e-6b50171459-abe28-minion-group-vsm4 fd1b8ef3-0a9b-431c-a27a-c2420c49ad82 11006 0 2019-09-19 11:17:58 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-6b50171459-abe28-minion-group-vsm4 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2545":"e2e-6b50171459-abe28-minion-group-vsm4","csi-hostpath-provisioning-9695":"e2e-6b50171459-abe28-minion-group-vsm4","csi-hostpath-v0-provisioning-4351":"e2e-6b50171459-abe28-minion-group-vsm4","csi-hostpath-volume-expand-152":"e2e-6b50171459-abe28-minion-group-vsm4","csi-mock-csi-mock-volumes-3556":"csi-mock-csi-mock-volumes-3556"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-gce-serial-1-5/us-west1-b/e2e-6b50171459-abe28-minion-group-vsm4,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7841865728 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7579721728 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2019-09-19 11:25:08 +0000 UTC,LastTransitionTime:2019-09-19 11:18:03 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-09-19 11:18:27 +0000 UTC,LastTransitionTime:2019-09-19 11:18:27 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-19 11:25:22 +0000 UTC,LastTransitionTime:2019-09-19 11:17:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-19 11:25:22 +0000 UTC,LastTransitionTime:2019-09-19 11:17:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-19 11:25:22 +0000 UTC,LastTransitionTime:2019-09-19 11:17:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-19 11:25:22 +0000 UTC,LastTransitionTime:2019-09-19 11:17:59 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:34.82.29.0,},NodeAddress{Type:InternalDNS,Address:e2e-6b50171459-abe28-minion-group-vsm4.c.k8s-gce-serial-1-5.internal,},NodeAddress{Type:Hostname,Address:e2e-6b50171459-abe28-minion-group-vsm4.c.k8s-gce-serial-1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:308999d5f108b60a0b0c166ea476a248,SystemUUID:308999D5-F108-B60A-0B0C-166EA476A248,BootID:4db705e7-a359-4fcb-b3f8-ac4b27f0833a,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.17.0-alpha.0.1563+f9a484c52c5902,KubeProxyVersion:v1.17.0-alpha.0.1563+f9a484c52c5902,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:238318516,},ContainerImage{Names:[httpd@sha256:dfb792aa3fed0694d8361ec066c592381e9bdff508292eceedd1ce28fb020f71 httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.17.0-alpha.0.1563_f9a484c52c5902],SizeBytes:91611090,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:b23381325dc242cb761861514195efc600f6a403fcd20a2ac4c6b26704def66f quay.io/k8scsi/csi-provisioner:v0.4.1],SizeBytes:53938589,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 
quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:6fe3b8b1f8a1b3706fe740a278266cef53a9308d9b1ae736a46684c0d0cb52e1 quay.io/k8scsi/csi-attacher:v0.4.1],SizeBytes:49452185,},ContainerImage{Names:[quay.io/k8scsi/driver-registrar@sha256:b7c6cdbd3ad9e585f79365a5b9f5a83551629c2a3235ed0825d7629371963c10 quay.io/k8scsi/driver-registrar:v0.4.1],SizeBytes:47425903,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:41861013,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:a2db01cfd2ae1a16f0feef274160c659c1ac5aa433e1c514de20e334cb66c674 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:0aa496f3e7ff7240abbf306e4244a75c5e59cbf2e4dbc246a6db2ca1bc67c6b1 quay.io/k8scsi/hostpathplugin:v0.4.1],SizeBytes:18583451,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:11337839,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9695^29979671-dad0-11e9-ba27-42010a280004 
kubernetes.io/gce-pd/e2e-6b50171459-abe28-d-pvc-4bb43f90-a9be-4507-ae93-490f43a6d41d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9695^29979671-dad0-11e9-ba27-42010a280004,DevicePath:,},AttachedVolume{Name:kubernetes.io/gce-pd/e2e-6b50171459-abe28-d-pvc-4bb43f90-a9be-4507-ae93-490f43a6d41d,DevicePath:/dev/disk/by-id/google-e2e-6b50171459-abe28-d-pvc-4bb43f90-a9be-4507-ae93-490f43a6d41d,},},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Sep 19 11:25:28.222: INFO: 
Logging kubelet events for node e2e-6b50171459-abe28-minion-group-vsm4
Sep 19 11:25:28.279: INFO: 
Logging pods the kubelet thinks are on node e2e-6b50171459-abe28-minion-group-vsm4
Sep 19 11:25:28.396: INFO: fluentd-gcp-v3.2.0-mbv2w started at 2019-09-19 11:17:59 +0000 UTC (0+2 container statuses recorded)
Sep 19 11:25:28.396: INFO: 	Container fluentd-gcp ready: true, restart count 0
... skipping 53 lines ...
  test/e2e/storage/csi_volumes.go:64
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:92
      should be able to unmount after the subpath directory is deleted [It]
      test/e2e/storage/testsuites/subpath.go:424

      Sep 19 11:25:26.095: Unexpected error:
          <*errors.errorString | 0xc002c53cb0>: {
              s: "PersistentVolumeClaims [csi-hostpathppgh4] not all in phase Bound within 5m0s",
          }
          PersistentVolumeClaims [csi-hostpathppgh4] not all in phase Bound within 5m0s
      occurred
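
[Editor's note: the failure above is the e2e framework timing out while polling for the claim to reach phase Bound. A minimal sketch of that polling pattern, assuming a recent client-go; the helper name waitForPVCBound and the 2s interval are illustrative, not the framework's actual code.]

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls the claim until it reports phase Bound or the
// timeout elapses, mirroring the "not all in phase Bound within 5m0s" wait.
func waitForPVCBound(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}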

... skipping 365 lines ...
Sep 19 11:25:56.654: INFO: Trying to get logs from node e2e-6b50171459-abe28-minion-group-10wg pod exec-volume-test-gcepd-fn5f container exec-container-gcepd-fn5f: <nil>
STEP: delete the pod
Sep 19 11:25:56.809: INFO: Waiting for pod exec-volume-test-gcepd-fn5f to disappear
Sep 19 11:25:56.902: INFO: Pod exec-volume-test-gcepd-fn5f no longer exists
STEP: Deleting pod exec-volume-test-gcepd-fn5f
Sep 19 11:25:56.902: INFO: Deleting pod "exec-volume-test-gcepd-fn5f" in namespace "volume-4489"
Sep 19 11:25:58.224: INFO: error deleting PD "e2e-6b50171459-abe28-8af32887-44e6-4959-a967-bcf18eecaffd": googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-8af32887-44e6-4959-a967-bcf18eecaffd' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-10wg', resourceInUseByAnotherResource
Sep 19 11:25:58.224: INFO: Couldn't delete PD "e2e-6b50171459-abe28-8af32887-44e6-4959-a967-bcf18eecaffd", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-8af32887-44e6-4959-a967-bcf18eecaffd' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-10wg', resourceInUseByAnotherResource
Sep 19 11:26:05.612: INFO: Successfully deleted PD "e2e-6b50171459-abe28-8af32887-44e6-4959-a967-bcf18eecaffd".
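
[Editor's note: the two messages above show the expected retry path: the node is still detaching the disk, so the first delete fails with googleapi Error 400 (resourceInUseByAnotherResource) and the test sleeps and retries. A rough sketch of that loop against the GCE compute API, assuming google.golang.org/api/compute/v1; the function name and 2-minute cap are illustrative.]

package main

import (
	"fmt"
	"time"

	compute "google.golang.org/api/compute/v1"
	"google.golang.org/api/googleapi"
)

// deletePDWithRetry keeps retrying Disks.Delete while the disk is still
// attached to an instance; GCE reports that state as a 400 with reason
// resourceInUseByAnotherResource until detach completes.
func deletePDWithRetry(svc *compute.Service, project, zone, disk string) error {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		_, err := svc.Disks.Delete(project, zone, disk).Do()
		if err == nil {
			return nil
		}
		if apiErr, ok := err.(*googleapi.Error); !ok || apiErr.Code != 400 {
			return err // not the "still in use" case; give up immediately
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("disk %q still in use after retries", disk)
}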
Sep 19 11:26:05.612: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:152
Sep 19 11:26:05.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4489" for this suite.
Sep 19 11:26:11.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 19 11:26:12.174: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"kubectl.example.com", Version:"v1"}
Sep 19 11:26:12.174: INFO: Error discovering server preferred namespaced resources: unable to retrieve the complete list of server APIs: kubectl.example.com/v1: the server could not find the requested resource, retrying in 2s.
Sep 19 11:26:15.700: INFO: namespace volume-4489 deletion completed in 10.045557631s
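
[Editor's note: the discovery error above is benign: an aggregated API group (kubectl.example.com/v1, installed by another test) has gone away mid-teardown, so listing server preferred namespaced resources is retried. A condensed sketch of that retry, assuming a recent client-go; the retry count and interval are illustrative.]

package main

import (
	"time"

	"k8s.io/client-go/kubernetes"
)

// listNamespacedResources retries discovery a few times because aggregated
// API groups can disappear between the group list and the per-group fetch.
func listNamespacedResources(c kubernetes.Interface) error {
	var err error
	for attempt := 0; attempt < 3; attempt++ {
		if _, err = c.Discovery().ServerPreferredNamespacedResources(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return err
}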


• [SLOW TEST:42.903 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
... skipping 732 lines ...
Sep 19 11:25:38.266: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-p55h7] to have phase Bound
Sep 19 11:25:38.311: INFO: PersistentVolumeClaim pvc-p55h7 found but phase is Pending instead of Bound.
Sep 19 11:25:40.365: INFO: PersistentVolumeClaim pvc-p55h7 found and phase=Bound (2.098032401s)
Sep 19 11:25:40.365: INFO: Waiting up to 3m0s for PersistentVolume gce-5dl6t to have phase Bound
Sep 19 11:25:40.405: INFO: PersistentVolume gce-5dl6t found and phase=Bound (40.002812ms)
STEP: Creating the Client Pod
[It] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:124
STEP: Deleting the Claim
Sep 19 11:26:02.731: INFO: Deleting PersistentVolumeClaim "pvc-p55h7"
STEP: Deleting the Pod
Sep 19 11:26:02.984: INFO: Deleting pod "pvc-tester-tljvt" in namespace "pv-1816"
Sep 19 11:26:03.027: INFO: Wait up to 5m0s for pod "pvc-tester-tljvt" to be fully deleted
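
[Editor's note: the test's point is ordering: the claim is deleted first, and the pod's deletion must still finish; the PVC itself is held by the pvc-protection finalizer until the pod is gone, after which PD detach proceeds. A sketch of the delete-then-wait sequence, assuming a recent client-go; names are illustrative.]

package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteClaimThenPod removes the PVC first, then the pod, and waits for the
// pod object to disappear entirely (not just enter Terminating).
func deleteClaimThenPod(c kubernetes.Interface, ns, pvc, pod string) error {
	ctx := context.TODO()
	if err := c.CoreV1().PersistentVolumeClaims(ns).Delete(ctx, pvc, metav1.DeleteOptions{}); err != nil {
		return err
	}
	if err := c.CoreV1().Pods(ns).Delete(ctx, pod, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
}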
... skipping 15 lines ...
Sep 19 11:26:31.623: INFO: Successfully deleted PD "e2e-6b50171459-abe28-c28d2e28-ef49-4348-8f67-d6a217ad0368".


• [SLOW TEST:57.026 seconds]
[sig-storage] PersistentVolumes GCEPD
test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:124
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/storage/testsuites/base.go:93
... skipping 785 lines ...
STEP: Proxying to Pod through the API server
[AfterEach] [sig-instrumentation] MetricsGrabber
  test/e2e/framework/framework.go:152
Sep 19 11:26:36.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-5989" for this suite.
Sep 19 11:26:44.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 19 11:26:45.226: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"kubectl.example.com", Version:"v1"}
Sep 19 11:26:45.226: INFO: Error discovering server preferred namespaced resources: unable to retrieve the complete list of server APIs: kubectl.example.com/v1: the server could not find the requested resource, retrying in 2s.
Sep 19 11:26:48.978: INFO: namespace metrics-grabber-5989 deletion completed in 12.402056332s
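
[Editor's note: the MetricsGrabber reaches the pod by proxying through the API server rather than dialing the pod IP directly, which works even when the test runner has no route to the pod network. A minimal sketch of that proxy GET with client-go; the pod name, port, and path are illustrative.]

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// proxyMetrics fetches /metrics from a pod via the API server's proxy
// subresource, avoiding any direct route to the pod network.
func proxyMetrics(c kubernetes.Interface, ns, pod string, port int) ([]byte, error) {
	return c.CoreV1().RESTClient().Get().
		Namespace(ns).
		Resource("pods").
		Name(fmt.Sprintf("%s:%d", pod, port)).
		SubResource("proxy").
		Suffix("metrics").
		Do(context.TODO()).
		Raw()
}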


• [SLOW TEST:12.941 seconds]
[sig-instrumentation] MetricsGrabber
test/e2e/instrumentation/common/framework.go:23
... skipping 1148 lines ...
Sep 19 11:26:39.650: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:152
Sep 19 11:26:39.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9715" for this suite.
Sep 19 11:26:53.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 19 11:26:54.062: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"crd-publish-openapi-test-unknown-at-root.example.com", Version:"v1"}
Sep 19 11:26:54.062: INFO: Error discovering server preferred namespaced resources: unable to retrieve the complete list of server APIs: crd-publish-openapi-test-unknown-at-root.example.com/v1: the server could not find the requested resource, retrying in 2s.
Sep 19 11:26:57.766: INFO: namespace persistent-local-volumes-test-9715 deletion completed in 18.024597219s


• [SLOW TEST:56.852 seconds]
[sig-storage] PersistentVolumes-local 
test/e2e/storage/utils/framework.go:23
... skipping 267 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 19 11:26:52.107: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-7208
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating configMap that has name configmap-test-emptyKey-27175869-3367-422c-8d7c-344fe277822a
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:152
Sep 19 11:26:52.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7208" for this suite.
Sep 19 11:26:58.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 19 11:27:00.326: INFO: namespace configmap-7208 deletion completed in 7.684546633s


• [SLOW TEST:8.270 seconds]
[sig-node] ConfigMap
test/e2e/common/configmap.go:32
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
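
[Editor's note: the conformance test above only needs the API server's validation to reject the object. A sketch of the rejected request, assuming a recent client-go and that the rejection surfaces as an Invalid (422) error; names are illustrative.]

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEmptyKeyConfigMap attempts to create a ConfigMap whose data map has
// an empty key; validation is expected to refuse it.
func createEmptyKeyConfigMap(c kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"},
	}
	_, err := c.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{})
	if !apierrors.IsInvalid(err) {
		return fmt.Errorf("expected Invalid error, got: %v", err)
	}
	return nil
}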
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:93
Sep 19 11:27:00.333: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2620 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should not launch unsafe, but not explicitly enabled sysctls on the node
  test/e2e/common/sysctl.go:188
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
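
[Editor's note: the pod in this test asks for a sysctl that is neither in the kubelet's safe set nor explicitly allowed via --allowed-unsafe-sysctls, so the kubelet refuses to start it. A sketch of such a pod spec; net.core.somaxconn is the usual greylisted example, other names are illustrative.]

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// greylistedSysctlPod requests net.core.somaxconn, which is namespaced (so
// the API server accepts the pod) but not safe-listed, so the kubelet
// rejects it unless started with --allowed-unsafe-sysctls=net.core.somaxconn.
func greylistedSysctlPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-greylist"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "net.core.somaxconn", Value: "1024"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
}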
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:152
Sep 19 11:27:32.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7012" for this suite.
Sep 19 11:27:39.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1530 lines ...
Sep 19 11:27:36.715: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.199.155.26 --kubeconfig=/workspace/.kube/config exec gcepd-client --namespace=volume-7210 -- grep  /opt/0  /proc/mounts'
Sep 19 11:27:37.605: INFO: stderr: ""
Sep 19 11:27:37.605: INFO: stdout: "/dev/sdb /opt/0 ext3 rw,relatime,data=ordered 0 0\n"
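
[Editor's note: the grep above confirms the GCE PD landed at /opt/0 with the requested ext3 filesystem by reading the pod's /proc/mounts. A sketch of driving that check from test code via kubectl exec, using os/exec; paths and names are illustrative.]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// verifyMount runs `kubectl exec ... grep <mountPath> /proc/mounts` and
// checks that the mount is present with the expected filesystem type.
func verifyMount(kubeconfig, ns, pod, mountPath, fsType string) error {
	out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
		"exec", pod, "--namespace", ns, "--",
		"grep", mountPath, "/proc/mounts").CombinedOutput()
	if err != nil {
		return fmt.Errorf("grep failed: %v: %s", err, out)
	}
	if !strings.Contains(string(out), " "+fsType+" ") {
		return fmt.Errorf("mount %s is not %s: %s", mountPath, fsType, out)
	}
	return nil
}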
STEP: cleaning the environment after gcepd
Sep 19 11:27:37.605: INFO: Deleting pod "gcepd-client" in namespace "volume-7210"
Sep 19 11:27:37.647: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
Sep 19 11:27:47.399: INFO: error deleting PD "e2e-6b50171459-abe28-e8b7e452-1741-4f21-8ee3-6e17db3f4086": googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-e8b7e452-1741-4f21-8ee3-6e17db3f4086' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-vsm4', resourceInUseByAnotherResource
Sep 19 11:27:47.399: INFO: Couldn't delete PD "e2e-6b50171459-abe28-e8b7e452-1741-4f21-8ee3-6e17db3f4086", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-e8b7e452-1741-4f21-8ee3-6e17db3f4086' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-vsm4', resourceInUseByAnotherResource
Sep 19 11:27:55.126: INFO: Successfully deleted PD "e2e-6b50171459-abe28-e8b7e452-1741-4f21-8ee3-6e17db3f4086".
Sep 19 11:27:55.126: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:152
Sep 19 11:27:55.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7210" for this suite.
... skipping 451 lines ...
Sep 19 11:27:53.677: INFO: Waiting for PV local-pvh9r26 to bind to PVC pvc-cbvrw
Sep 19 11:27:53.677: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-cbvrw] to have phase Bound
Sep 19 11:27:53.723: INFO: PersistentVolumeClaim pvc-cbvrw found but phase is Pending instead of Bound.
Sep 19 11:27:55.826: INFO: PersistentVolumeClaim pvc-cbvrw found and phase=Bound (2.149002022s)
Sep 19 11:27:55.827: INFO: Waiting up to 3m0s for PersistentVolume local-pvh9r26 to have phase Bound
Sep 19 11:27:55.915: INFO: PersistentVolume local-pvh9r26 found and phase=Bound (88.898233ms)
[It] should fail scheduling due to different NodeAffinity
  test/e2e/storage/persistent_volumes-local.go:365
STEP: local-volume-type: dir
STEP: Initializing test volumes
Sep 19 11:27:55.985: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.199.155.26 --kubeconfig=/workspace/.kube/config exec --namespace=persistent-local-volumes-test-8744 hostexec-e2e-6b50171459-abe28-minion-group-10wg -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7f06d46d-ae15-4b76-a52a-605d7c5e14b5'
Sep 19 11:27:58.874: INFO: stderr: ""
Sep 19 11:27:58.874: INFO: stdout: ""
... skipping 26 lines ...

• [SLOW TEST:31.980 seconds]
[sig-storage] PersistentVolumes-local 
test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  test/e2e/storage/persistent_volumes-local.go:343
    should fail scheduling due to different NodeAffinity
    test/e2e/storage/persistent_volumes-local.go:365
------------------------------
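
[Editor's note: a local PV carries a hard nodeAffinity to the node that owns the directory, so a pod forced onto any other node can never be scheduled, which is what the test above asserts. A sketch of the conflicting pair of constraints; node names are illustrative.]

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// localPVAffinity pins a PV to node-a via nodeAffinity; a pod that also sets
// spec.nodeSelector to node-b can never satisfy both and stays Pending with
// a FailedScheduling event.
func localPVAffinity() (*corev1.VolumeNodeAffinity, map[string]string) {
	pvAffinity := &corev1.VolumeNodeAffinity{
		Required: &corev1.NodeSelector{
			NodeSelectorTerms: []corev1.NodeSelectorTerm{{
				MatchExpressions: []corev1.NodeSelectorRequirement{{
					Key:      "kubernetes.io/hostname",
					Operator: corev1.NodeSelectorOpIn,
					Values:   []string{"node-a"},
				}},
			}},
		},
	}
	podNodeSelector := map[string]string{"kubernetes.io/hostname": "node-b"}
	return pvAffinity, podNodeSelector
}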
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:93
Sep 19 11:28:17.823: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
... skipping 41 lines ...
Sep 19 11:27:49.814: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:698
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
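
[Editor's note: each registration step above varies only timeoutSeconds and failurePolicy: a 1s timeout in front of a 5s webhook fails requests under failurePolicy Fail but is tolerated under Ignore, and an empty timeout defaults to 10s in v1. A sketch of the registration object, assuming admissionregistration.k8s.io/v1 types; the webhook and service names are illustrative apart from e2e-test-webhook, which appears in the log.]

package main

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// slowWebhookConfig registers a webhook whose 1s timeout is shorter than the
// backend's 5s latency; with FailurePolicy Ignore the request still succeeds.
func slowWebhookConfig(ns string) *admissionregistrationv1.ValidatingWebhookConfiguration {
	timeout := int32(1)
	ignore := admissionregistrationv1.Ignore
	none := admissionregistrationv1.SideEffectClassNone
	path := "/always-allow-delay-5s"
	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "slow.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: ns, Name: "e2e-test-webhook", Path: &path,
				},
			},
			// Rules omitted for brevity; a real config needs match rules.
			TimeoutSeconds:          &timeout,
			FailurePolicy:           &ignore,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}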
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:152
Sep 19 11:28:03.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9687" for this suite.
Sep 19 11:28:09.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 727 lines ...
Sep 19 11:28:17.806: INFO: Trying to get logs from node e2e-6b50171459-abe28-minion-group-vsm4 pod exec-volume-test-gcepd-sp6h container exec-container-gcepd-sp6h: <nil>
STEP: delete the pod
Sep 19 11:28:17.905: INFO: Waiting for pod exec-volume-test-gcepd-sp6h to disappear
Sep 19 11:28:17.947: INFO: Pod exec-volume-test-gcepd-sp6h no longer exists
STEP: Deleting pod exec-volume-test-gcepd-sp6h
Sep 19 11:28:17.947: INFO: Deleting pod "exec-volume-test-gcepd-sp6h" in namespace "volume-8732"
Sep 19 11:28:19.078: INFO: error deleting PD "e2e-6b50171459-abe28-c3eef3a5-4c2a-45ea-ac8f-a02a56ea5739": googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-c3eef3a5-4c2a-45ea-ac8f-a02a56ea5739' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-vsm4', resourceInUseByAnotherResource
Sep 19 11:28:19.078: INFO: Couldn't delete PD "e2e-6b50171459-abe28-c3eef3a5-4c2a-45ea-ac8f-a02a56ea5739", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-c3eef3a5-4c2a-45ea-ac8f-a02a56ea5739' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-vsm4', resourceInUseByAnotherResource
Sep 19 11:28:25.726: INFO: error deleting PD "e2e-6b50171459-abe28-c3eef3a5-4c2a-45ea-ac8f-a02a56ea5739": googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-c3eef3a5-4c2a-45ea-ac8f-a02a56ea5739' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-vsm4', resourceInUseByAnotherResource
Sep 19 11:28:25.726: INFO: Couldn't delete PD "e2e-6b50171459-abe28-c3eef3a5-4c2a-45ea-ac8f-a02a56ea5739", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-serial-1-5/zones/us-west1-b/disks/e2e-6b50171459-abe28-c3eef3a5-4c2a-45ea-ac8f-a02a56ea5739' is already being used by 'projects/k8s-gce-serial-1-5/zones/us-west1-b/instances/e2e-6b50171459-abe28-minion-group-vsm4', resourceInUseByAnotherResource
Sep 19 11:28:33.574: INFO: Successfully deleted PD "e2e-6b50171459-abe28-c3eef3a5-4c2a-45ea-ac8f-a02a56ea5739".
Sep 19 11:28:33.574: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:152
Sep 19 11:28:33.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8732" for this suite.
... skipping 344 lines ...
      test/e2e/storage/testsuites/volume_expand.go:218

      Driver hostPath doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:146
------------------------------
S{"component":"entrypoint","file":"prow/entrypoint/run.go:163","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2019-09-19T11:28:51Z"}
Traceback (most recent call last):
  File "../test-infra/scenarios/kubernetes_e2e.py", line 778, in <module>
    main(parse_args())
  File "../test-infra/scenarios/kubernetes_e2e.py", line 626, in main
    mode.start(runner_args)
  File "../test-infra/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 40 lines ...