PR apelisse: Refactor FieldManager tests to make them simpler
Result FAILURE
Tests 0 failed / 0 succeeded
Started 2019-09-11 20:12
Elapsed 28m8s
Revision c86ee8c712b505feb5fddaa6115784bb05765287
Refs 82554

No Test Failures!


Error lines from build-log.txt

... skipping 143 lines ...
INFO: 5212 processes: 5133 remote cache hit, 29 processwrapper-sandbox, 50 remote.
INFO: Build completed successfully, 5305 total actions
INFO: Build completed successfully, 5305 total actions
make: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
2019/09/11 20:21:00 process.go:155: Step 'make -C /home/prow/go/src/k8s.io/kubernetes bazel-release' finished in 8m29.280395172s
2019/09/11 20:21:00 util.go:255: Flushing memory.
2019/09/11 20:21:02 util.go:265: flushMem error (page cache): exit status 1
2019/09/11 20:21:02 process.go:153: Running: /home/prow/go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce --allow-dup
push-build.sh: BEGIN main on 1904c47e-d4d0-11e9-a582-8a06e185f399 Wed Sep 11 20:21:02 UTC 2019

$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
INFO: Invocation ID: ce4af6cf-fd74-496b-a160-ff5dc8670921
Loading: 
... skipping 849 lines ...
Trying to find master named 'e2e-8b7803f865-abe28-master'
Looking for address 'e2e-8b7803f865-abe28-master-ip'
Using master: e2e-8b7803f865-abe28-master (external IP: 35.230.40.69; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

...........Kubernetes cluster created.
Cluster "k8s-boskos-gce-project-13_e2e-8b7803f865-abe28" set.
User "k8s-boskos-gce-project-13_e2e-8b7803f865-abe28" set.
Context "k8s-boskos-gce-project-13_e2e-8b7803f865-abe28" created.
Switched to context "k8s-boskos-gce-project-13_e2e-8b7803f865-abe28".
... skipping 119 lines ...

Sep 11 20:33:22.958: INFO: cluster-master-image: cos-73-11647-163-0
Sep 11 20:33:22.958: INFO: cluster-node-image: cos-73-11647-163-0
Sep 11 20:33:22.958: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:33:22.962: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 11 20:33:23.152: INFO: Waiting up to 10m0s for all pods (need at least 8) in namespace 'kube-system' to be running and ready
Sep 11 20:33:23.326: INFO: The status of Pod l7-lb-controller-v1.2.3-e2e-8b7803f865-abe28-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 11 20:33:23.327: INFO: 27 / 28 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 11 20:33:23.327: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
Sep 11 20:33:23.327: INFO: POD                                                  NODE                         PHASE    GRACE  CONDITIONS
Sep 11 20:33:23.327: INFO: l7-lb-controller-v1.2.3-e2e-8b7803f865-abe28-master  e2e-8b7803f865-abe28-master  Pending         []
Sep 11 20:33:23.327: INFO: 
Sep 11 20:33:25.450: INFO: The status of Pod l7-lb-controller-v1.2.3-e2e-8b7803f865-abe28-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 11 20:33:25.450: INFO: 27 / 28 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Sep 11 20:33:25.450: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
Sep 11 20:33:25.450: INFO: POD                                                  NODE                         PHASE    GRACE  CONDITIONS
Sep 11 20:33:25.450: INFO: l7-lb-controller-v1.2.3-e2e-8b7803f865-abe28-master  e2e-8b7803f865-abe28-master  Pending         []
Sep 11 20:33:25.450: INFO: 
Sep 11 20:33:27.448: INFO: 28 / 28 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
... skipping 809 lines ...
Sep 11 20:33:35.862: INFO: AfterEach: Cleaning up test resources


S [SKIPPING] in Spec Setup (BeforeEach) [8.108 seconds]
[sig-storage] PersistentVolumes:vsphere
test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on vspehre volume detach [BeforeEach]
  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:166

  Only supported for providers [vsphere] (not gce)

  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:62
------------------------------
... skipping 5093 lines ...
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-7534 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Sep 11 20:35:36.884: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.40.69 --kubeconfig=/workspace/.kube/config exec --namespace=services-7534 execpod-w7rsj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/; test "$?" -ne "0"'
Sep 11 20:35:39.349: INFO: rc: 1
Sep 11 20:35:39.350: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.230.40.69 --kubeconfig=/workspace/.kube/config exec --namespace=services-7534 execpod-w7rsj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/; test "$?" -ne "0"] []  <nil> NOW: 2019-09-11 20:35:39.169069776 +0000 UTC m=+29.431263386 + curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1
 [] <nil> 0xc002230b40 exit status 1 <nil> <nil> true [0xc0024fd578 0xc0024fd590 0xc0024fd5a8] [0xc0024fd578 0xc0024fd590 0xc0024fd5a8] [0xc0024fd588 0xc0024fd5a0] [0x10ef820 0x10ef820] 0xc00233a8a0 <nil>}:
Command stdout:
NOW: 2019-09-11 20:35:39.169069776 +0000 UTC m=+29.431263386
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Sep 11 20:35:41.350: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.40.69 --kubeconfig=/workspace/.kube/config exec --namespace=services-7534 execpod-w7rsj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/; test "$?" -ne "0"'
Sep 11 20:35:44.247: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Sep 11 20:35:44.247: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Sep 11 20:35:44.341: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.40.69 --kubeconfig=/workspace/.kube/config exec --namespace=services-7534 execpod-w7rsj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/'
Sep 11 20:35:46.425: INFO: rc: 7
Sep 11 20:35:46.425: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running &{/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.230.40.69 --kubeconfig=/workspace/.kube/config exec --namespace=services-7534 execpod-w7rsj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/] []  <nil>  + curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/
command terminated with exit code 7
 [] <nil> 0xc002160b10 exit status 7 <nil> <nil> true [0xc001be4888 0xc001be48b8 0xc001be48d8] [0xc001be4888 0xc001be48b8 0xc001be48d8] [0xc001be48a8 0xc001be48c8] [0x10ef820 0x10ef820] 0xc0024f9b00 <nil>}:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Sep 11 20:35:48.426: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.40.69 --kubeconfig=/workspace/.kube/config exec --namespace=services-7534 execpod-w7rsj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/'
Sep 11 20:35:49.888: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7534.svc.cluster.local:80/\n"
Sep 11 20:35:49.888: INFO: stdout: "NOW: 2019-09-11 20:35:49.642735932 +0000 UTC m=+39.904929538"
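The probes above hinge on curl's exit status: `0` means the service answered, `7` means the connection was refused, and the trailing `test "$?" -ne "0"` inverts that into "assert unreachable". As a standalone sketch of the same idiom (the `probe` helper name is illustrative, not part of the test framework):

```shell
#!/bin/sh
# Probe a URL and classify reachability by curl's exit status.
# curl exit codes: 0 = got a response, 7 = connection refused, 28 = timed out.
probe() {
  url="$1"
  curl -q -s --connect-timeout 2 "$url" > /dev/null
  rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "reachable"
  else
    echo "unreachable (curl exit $rc)"
  fi
  return "$rc"
}

# Port 1 on loopback is refused on most hosts, so this reports unreachable.
probe "http://127.0.0.1:1/" || true
```

The e2e test runs the equivalent one-liner through `kubectl exec` inside the cluster, so the exit status it inspects is curl's, not kubectl's.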
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-7534
... skipping 1842 lines ...
Sep 11 20:36:20.006: INFO: AfterEach: Cleaning up test resources


S [SKIPPING] in Spec Setup (BeforeEach) [10.268 seconds]
[sig-storage] PersistentVolumes:vsphere
test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on vsphere volume detach [BeforeEach]
  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:150

  Only supported for providers [vsphere] (not gce)

  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:62
------------------------------
... skipping 1967 lines ...
Sep 11 20:36:30.785: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-provisioning-321
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  test/e2e/storage/volume_provisioning.go:258
[It] should report an error and create no PV
  test/e2e/storage/volume_provisioning.go:900
Sep 11 20:36:31.162: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [sig-storage] Dynamic Provisioning
  test/e2e/framework/framework.go:152
Sep 11 20:36:31.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-321" for this suite.
... skipping 3 lines ...

S [SKIPPING] [9.953 seconds]
[sig-storage] Dynamic Provisioning
test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  test/e2e/storage/volume_provisioning.go:899
    should report an error and create no PV [It]
    test/e2e/storage/volume_provisioning.go:900

    Only supported for providers [aws] (not gce)

    test/e2e/storage/volume_provisioning.go:901
------------------------------
... skipping 2499 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 11 20:36:50.331: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-8639
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  test/e2e/apps/job.go:110
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:152
Sep 11 20:37:11.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep 11 20:37:18.643: INFO: namespace job-8639 deletion completed in 7.541920284s


• [SLOW TEST:28.313 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  test/e2e/apps/job.go:110
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:92
... skipping 600 lines ...
Sep 11 20:37:18.401: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 11 20:37:18.401: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.40.69 --kubeconfig=/workspace/.kube/config describe pod redis-master-9bqv7 --namespace=kubectl-6762'
Sep 11 20:37:18.774: INFO: stderr: ""
Sep 11 20:37:18.774: INFO: stdout: "Name:         redis-master-9bqv7\nNamespace:    kubectl-6762\nPriority:     0\nNode:         e2e-8b7803f865-abe28-minion-group-pq9g/10.40.0.5\nStart Time:   Wed, 11 Sep 2019 20:37:13 +0000\nLabels:       app=redis\n              role=master\nAnnotations:  kubernetes.io/psp: e2e-test-privileged-psp\nStatus:       Running\nIP:           10.64.0.82\nIPs:\n  IP:           10.64.0.82\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://56dac5d7c12a1e07adadf0b2dbaefb6774a5b4f8344acf45c14e7342bc9b7e45\n    Image:          docker.io/library/redis:5.0.5-alpine\n    Image ID:       docker-pullable://redis@sha256:a606eaca41c3c69c7d2c8a142ec445e71156bae8526ae7970f62b6399e57761c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 11 Sep 2019 20:37:17 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-p2dcc (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-p2dcc:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-p2dcc\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                                             Message\n  ----    ------     ----       ----                                             -------\n  Normal  Scheduled  <unknown>  default-scheduler                                Successfully assigned kubectl-6762/redis-master-9bqv7 to e2e-8b7803f865-abe28-minion-group-pq9g\n  Normal  Pulled     3s         kubelet, e2e-8b7803f865-abe28-minion-group-pq9g  Container image \"docker.io/library/redis:5.0.5-alpine\" already present on machine\n  Normal  Created    2s         kubelet, e2e-8b7803f865-abe28-minion-group-pq9g  Created container redis-master\n  Normal  Started    1s         kubelet, e2e-8b7803f865-abe28-minion-group-pq9g  Started container redis-master\n"
Sep 11 20:37:18.775: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.40.69 --kubeconfig=/workspace/.kube/config describe rc redis-master --namespace=kubectl-6762'
Sep 11 20:37:19.138: INFO: stderr: ""
Sep 11 20:37:19.139: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-6762\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        docker.io/library/redis:5.0.5-alpine\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  6s    replication-controller  Created pod: redis-master-9bqv7\n"
Sep 11 20:37:19.139: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.40.69 --kubeconfig=/workspace/.kube/config describe service redis-master --namespace=kubectl-6762'
Sep 11 20:37:19.533: INFO: stderr: ""
Sep 11 20:37:19.533: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-6762\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.0.217.248\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.64.0.82:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep 11 20:37:19.569: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.230.40.69 --kubeconfig=/workspace/.kube/config describe node e2e-8b7803f865-abe28-master'
Sep 11 20:37:19.999: INFO: stderr: ""
Sep 11 20:37:19.999: INFO: stdout: "Name:               e2e-8b7803f865-abe28-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-west1\n                    failure-domain.beta.kubernetes.io/zone=us-west1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=e2e-8b7803f865-abe28-master\n                    kubernetes.io/os=linux\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 11 Sep 2019 20:32:22 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Wed, 11 Sep 2019 20:32:47 +0000   Wed, 11 Sep 2019 20:32:47 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Wed, 11 Sep 2019 20:37:03 +0000   Wed, 11 Sep 2019 20:32:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Wed, 11 Sep 2019 20:37:03 +0000   Wed, 11 Sep 2019 20:32:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Wed, 11 Sep 2019 20:37:03 +0000   Wed, 11 Sep 2019 20:32:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Wed, 11 Sep 2019 20:37:03 +0000   Wed, 11 Sep 2019 20:32:23 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.40.0.2\n  ExternalIP:   35.230.40.69\n  InternalDNS:  e2e-8b7803f865-abe28-master.c.k8s-boskos-gce-project-13.internal\n  Hostname:     e2e-8b7803f865-abe28-master.c.k8s-boskos-gce-project-13.internal\nCapacity:\n attachable-volumes-gce-pd:  127\n cpu:                        1\n ephemeral-storage:          16293736Ki\n hugepages-2Mi:              0\n memory:                     3787520Ki\n pods:                       110\nAllocatable:\n attachable-volumes-gce-pd:  127\n cpu:                        1\n ephemeral-storage:          15016307073\n hugepages-2Mi:              0\n memory:                     3531520Ki\n pods:                       110\nSystem Info:\n Machine ID:                 831926bdb449dfc83d601970992c1722\n System UUID:                831926BD-B449-DFC8-3D60-1970992C1722\n Boot ID:                    0db9444d-345d-47b1-88c3-a02e8846641c\n Kernel Version:             4.14.94+\n OS Image:                   Container-Optimized OS from Google\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.3\n Kubelet Version:            v1.17.0-alpha.0.1269+7476e340332f6d\n Kube-Proxy Version:         v1.17.0-alpha.0.1269+7476e340332f6d\nPodCIDR:                     10.64.3.0/24\nPodCIDRs:                    10.64.3.0/24\nProviderID:                  gce://k8s-boskos-gce-project-13/us-west1-b/e2e-8b7803f865-abe28-master\nNon-terminated Pods:         (10 in total)\n  Namespace                  Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                   ------------  ----------  ---------------  -------------  ---\n  kube-system                etcd-empty-dir-cleanup-e2e-8b7803f865-abe28-master     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s\n  kube-system                etcd-server-e2e-8b7803f865-abe28-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         4m20s\n  kube-system                etcd-server-events-e2e-8b7803f865-abe28-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         4m32s\n  kube-system                fluentd-gcp-v3.2.0-xwf2w                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s\n  kube-system                kube-addon-manager-e2e-8b7803f865-abe28-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         4m7s\n  kube-system                kube-apiserver-e2e-8b7803f865-abe28-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         4m41s\n  kube-system                kube-controller-manager-e2e-8b7803f865-abe28-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         4m27s\n  kube-system                kube-scheduler-e2e-8b7803f865-abe28-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         4m24s\n  kube-system                l7-lb-controller-v1.2.3-e2e-8b7803f865-abe28-master    10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         3m57s\n  kube-system                metadata-proxy-v0.1-vrw22                              32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      4m57s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests    Limits\n  --------                   --------    ------\n  cpu                        872m (87%)  32m (3%)\n  memory                     145Mi (4%)  45Mi (1%)\n  ephemeral-storage          0 (0%)      0 (0%)\n  attachable-volumes-gce-pd  0           0\nEvents:                      <none>\n"
... skipping 725 lines ...
Sep 11 20:37:26.463: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2e9905a8-940e-4727-a85e-d68d316cf142"
Sep 11 20:37:26.463: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2e9905a8-940e-4727-a85e-d68d316cf142" in namespace "pods-9029" to be "terminated due to deadline exceeded"
Sep 11 20:37:26.500: INFO: Pod "pod-update-activedeadlineseconds-2e9905a8-940e-4727-a85e-d68d316cf142": Phase="Running", Reason="", readiness=true. Elapsed: 37.354435ms
Sep 11 20:37:28.535: INFO: Pod "pod-update-activedeadlineseconds-2e9905a8-940e-4727-a85e-d68d316cf142": Phase="Running", Reason="", readiness=true. Elapsed: 2.072608412s
Sep 11 20:37:30.571: INFO: Pod "pod-update-activedeadlineseconds-2e9905a8-940e-4727-a85e-d68d316cf142": Phase="Running", Reason="", readiness=true. Elapsed: 4.107848847s
Sep 11 20:37:32.628: INFO: Pod "pod-update-activedeadlineseconds-2e9905a8-940e-4727-a85e-d68d316cf142": Phase="Running", Reason="", readiness=true. Elapsed: 6.164792211s
Sep 11 20:37:34.663: INFO: Pod "pod-update-activedeadlineseconds-2e9905a8-940e-4727-a85e-d68d316cf142": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 8.200253239s
Sep 11 20:37:34.663: INFO: Pod "pod-update-activedeadlineseconds-2e9905a8-940e-4727-a85e-d68d316cf142" satisfied condition "terminated due to deadline exceeded"
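The wait above is the framework's standard poll-until-timeout pattern: re-check the pod phase at a fixed interval until the condition holds or the deadline passes. The e2e framework implements this in Go; the shell sketch below only illustrates the pattern, and the `wait_for` helper and marker file are hypothetical:

```shell
#!/bin/sh
# Poll a condition command until it succeeds or a deadline passes.
# Usage: wait_for <timeout_seconds> <interval_seconds> <command...>
wait_for() {
  timeout="$1"; interval="$2"; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if "$@"; then
      echo "condition met after ${elapsed}s"
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timed out after ${timeout}s"
  return 1
}

# Hypothetical condition: succeeds once a marker file exists.
touch /tmp/pod-ready.marker
wait_for 10 1 test -f /tmp/pod-ready.marker   # prints: condition met after 0s
```

In the real test the condition is "pod phase is Failed with reason DeadlineExceeded", checked against the API server on each iteration, which is why the log shows one `Phase=` line per poll.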
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:152
Sep 11 20:37:34.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9029" for this suite.
Sep 11 20:37:42.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1680 lines ...
Sep 11 20:37:30.419: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Sep 11 20:37:31.684: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass volume-expand-9258-gcepd-scn2frb
STEP: creating a claim
STEP: Expanding non-expandable pvc
Sep 11 20:37:31.797: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Sep 11 20:37:31.868: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:33.941: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:35.948: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:37.954: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:39.975: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:41.963: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:43.961: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:45.946: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:47.946: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:49.941: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:51.942: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:53.944: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:55.944: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:57.945: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:37:59.949: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:38:01.950: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
Sep 11 20:38:02.021: INFO: Error updating pvc gcepdd2bbc with PersistentVolumeClaim "gcepdd2bbc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
STEP: Deleting pvc
Sep 11 20:38:02.021: INFO: Deleting PersistentVolumeClaim "gcepdd2bbc"
STEP: Deleting sc
Sep 11 20:38:02.101: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
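The repeated `Forbidden` errors above are the expected outcome of this test: the API server rejects resize requests for a bound claim whose StorageClass does not allow expansion. For context, a StorageClass that would permit such a resize sets `allowVolumeExpansion: true`; the sketch below is illustrative (the class name is hypothetical, and the test's generated class deliberately omits the field):

```shell
# Write out a hypothetical expandable StorageClass manifest; without
# allowVolumeExpansion: true, PVC spec.resources.requests is immutable
# and resizes fail with the Forbidden error seen in the log above.
cat <<'EOF' > expandable-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-gce-pd   # illustrative name
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
EOF
# Applying it against a cluster would be: kubectl apply -f expandable-sc.yaml
cat expandable-sc.yaml
```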
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/framework/framework.go:152
... skipping 2080 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 11 20:38:17.109: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-3312
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:152
Sep 11 20:38:35.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep 11 20:38:43.116: INFO: namespace job-3312 deletion completed in 7.510486927s


• [SLOW TEST:26.007 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:92
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:151
... skipping 174 lines ...
Sep 11 20:38:22.008: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:22.117: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:22.310: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:22.435: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:22.520: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:22.579: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:22.778: INFO: Lookups using dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5732.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5732.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local jessie_udp@dns-test-service-2.dns-5732.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5732.svc.cluster.local]

Sep 11 20:38:27.889: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:27.976: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:28.085: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:28.251: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:29.071: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:29.281: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:29.442: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:29.584: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:29.989: INFO: Lookups using dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5732.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5732.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local jessie_udp@dns-test-service-2.dns-5732.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5732.svc.cluster.local]

Sep 11 20:38:32.844: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:32.918: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:32.975: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:33.035: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:33.215: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:33.261: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:33.319: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:33.410: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5732.svc.cluster.local from pod dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8: the server could not find the requested resource (get pods dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8)
Sep 11 20:38:33.543: INFO: Lookups using dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5732.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5732.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5732.svc.cluster.local jessie_udp@dns-test-service-2.dns-5732.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5732.svc.cluster.local]

Sep 11 20:38:38.380: INFO: DNS probes using dns-5732/dns-test-3122c10e-0604-4464-b553-7cffe96c0ab8 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 1320 lines ...
STEP: Wait for the deployment to be ready
Sep 11 20:38:43.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831123, loc:(*time.Location)(0x84742c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831123, loc:(*time.Location)(0x84742c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831123, loc:(*time.Location)(0x84742c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831123, loc:(*time.Location)(0x84742c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 11 20:38:45.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831123, loc:(*time.Location)(0x84742c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831123, loc:(*time.Location)(0x84742c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831123, loc:(*time.Location)(0x84742c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831123, loc:(*time.Location)(0x84742c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
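The two status dumps above show the test polling until the Deployment's `Available` condition flips to `True`. A minimal sketch of that readiness check, with a simplified stand-in for the apps/v1 condition type (not the real client-go struct):

```go
package main

import "fmt"

// Condition mirrors just the v1.DeploymentCondition fields that matter for
// readiness polling; the real type also carries timestamps and a message.
type Condition struct {
	Type, Status, Reason string
}

// deploymentAvailable reports whether the Available condition is True,
// which is what the wait loop above is polling for between retries.
func deploymentAvailable(conds []Condition) bool {
	for _, c := range conds {
		if c.Type == "Available" && c.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	// The state logged above: still progressing, not yet available.
	conds := []Condition{
		{"Available", "False", "MinimumReplicasUnavailable"},
		{"Progressing", "True", "ReplicaSetUpdated"},
	}
	fmt.Println(deploymentAvailable(conds)) // prints "false"
}
```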
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 11 20:38:48.665: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:698
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:152
Sep 11 20:38:49.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2758" for this suite.
... skipping 6 lines ...
  test/e2e/apimachinery/webhook.go:103


• [SLOW TEST:30.070 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/testsuites/base.go:92
Sep 11 20:39:12.019: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 549 lines ...
Sep 11 20:38:47.852: INFO: rc: 1
STEP: cleaning the environment after flex
Sep 11 20:38:47.852: INFO: Deleting pod "flex-client" in namespace "flexvolume-6569"
Sep 11 20:38:47.899: INFO: Wait up to 5m0s for pod "flex-client" to be fully deleted
STEP: waiting for flex client pod to terminate
Sep 11 20:38:58.087: INFO: Waiting up to 5m0s for pod "flex-client" in namespace "flexvolume-6569" to be "terminated due to deadline exceeded"
Sep 11 20:38:58.149: INFO: Pod "flex-client" in namespace "flexvolume-6569" not found. Error: pods "flex-client" not found
STEP: uninstalling flexvolume dummy-attachable-flexvolume-6569 from node e2e-8b7803f865-abe28-minion-group-pq9g
Sep 11 20:39:08.149: INFO: Getting external IP address for e2e-8b7803f865-abe28-minion-group-pq9g
Sep 11 20:39:08.656: INFO: ssh prow@34.83.236.246:22: command:   sudo rm -r /home/kubernetes/flexvolume/k8s~dummy-attachable-flexvolume-6569
Sep 11 20:39:08.656: INFO: ssh prow@34.83.236.246:22: stdout:    ""
Sep 11 20:39:08.656: INFO: ssh prow@34.83.236.246:22: stderr:    ""
Sep 11 20:39:08.656: INFO: ssh prow@34.83.236.246:22: exit code: 0
... skipping 248 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 11 20:39:12.058: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9259
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name secret-emptykey-test-5a1461ed-d998-40e6-b755-0337b1492b53
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:152
Sep 11 20:39:13.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9259" for this suite.
Sep 11 20:39:21.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 11 20:39:22.864: INFO: namespace secrets-9259 deletion completed in 9.517893016s


• [SLOW TEST:10.807 seconds]
[sig-api-machinery] Secrets
test/e2e/common/secrets.go:32
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
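The Secrets test above expects a secret with an empty data key to be rejected. Kubernetes key validation can be approximated with a single pattern — this regexp is a simplification of the apimachinery rule (the real validation also enforces a 253-character limit):

```go
package main

import (
	"fmt"
	"regexp"
)

// Secret data keys must be non-empty and consist of alphanumerics,
// '-', '_' or '.'; an empty key fails because "+" requires at least
// one character.
var secretKeyRe = regexp.MustCompile(`^[-._a-zA-Z0-9]+$`)

func validSecretKey(k string) bool { return secretKeyRe.MatchString(k) }

func main() {
	fmt.Println(validSecretKey(""))        // prints "false": empty key rejected
	fmt.Println(validSecretKey("tls.crt")) // prints "true"
}
```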
SSS
------------------------------
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:151
... skipping 861 lines ...
Sep 11 20:38:34.074: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-5382
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:698
STEP: creating the pod
Sep 11 20:38:34.411: INFO: PodSpec: initContainers in spec.initContainers
Sep 11 20:39:29.133: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ae942fc6-aefb-4269-ae26-41c807158c40", GenerateName:"", Namespace:"init-container-5382", SelfLink:"/api/v1/namespaces/init-container-5382/pods/pod-init-ae942fc6-aefb-4269-ae26-41c807158c40", UID:"b8dc41c8-6dd8-42c4-98df-6b9a7a7830ba", ResourceVersion:"11555", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63703831114, loc:(*time.Location)(0x84742c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"411456019"}, Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6vb4x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00234e400), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6vb4x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6vb4x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6vb4x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026fe098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"e2e-8b7803f865-abe28-minion-group-g9zj", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001400120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026fe120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026fe150)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0026fe158), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0026fe15c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831114, loc:(*time.Location)(0x84742c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831114, loc:(*time.Location)(0x84742c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831114, loc:(*time.Location)(0x84742c0)}}, 
Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703831114, loc:(*time.Location)(0x84742c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.40.0.4", PodIP:"10.64.2.122", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.2.122"}}, StartTime:(*v1.Time)(0xc002782160), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001526070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0015260e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://db5091684284e6736014709b208db0b725214c64d7d0dafcb508d41f285d6760", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0027821a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002782180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0026fe1ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:152
Sep 11 20:39:29.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5382" for this suite.
Sep 11 20:39:41.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 11 20:39:42.749: INFO: namespace init-container-5382 deletion completed in 13.572694864s


• [SLOW TEST:68.675 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:693
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 11 20:38:34.441: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 1786 lines ...
Sep 11 20:38:50.314: INFO: ssh prow@34.83.140.155:22: command:   sudo mkdir "/var/lib/kubelet/mount-propagation-8880"/host; sudo mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-8880"/host; echo host > "/var/lib/kubelet/mount-propagation-8880"/host/file
Sep 11 20:38:50.314: INFO: ssh prow@34.83.140.155:22: stdout:    ""
Sep 11 20:38:50.314: INFO: ssh prow@34.83.140.155:22: stderr:    ""
Sep 11 20:38:50.314: INFO: ssh prow@34.83.140.155:22: exit code: 0
Sep 11 20:38:50.355: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8880 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:38:50.355: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:38:52.973: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:38:53.009: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8880 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:38:53.009: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:38:55.277: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:38:55.319: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8880 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:38:55.319: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:38:57.538: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:38:57.640: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8880 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:38:57.640: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:38:59.018: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Sep 11 20:38:59.061: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8880 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:38:59.061: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:00.481: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:39:00.620: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8880 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:00.620: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:03.087: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Sep 11 20:39:03.126: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8880 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:03.126: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:04.500: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:39:04.797: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8880 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:04.797: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:06.276: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:39:06.347: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8880 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:06.347: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:07.134: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:39:07.209: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8880 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:07.209: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:08.318: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Sep 11 20:39:08.482: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8880 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:08.482: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:10.166: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Sep 11 20:39:10.296: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8880 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:10.296: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:11.191: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Sep 11 20:39:11.248: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8880 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:11.248: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:11.911: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:39:11.957: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8880 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:11.957: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:13.176: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:39:13.303: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8880 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:13.303: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:14.081: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Sep 11 20:39:14.267: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8880 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:14.267: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:15.167: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:39:15.205: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8880 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:15.206: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:16.176: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:39:16.211: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8880 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:16.211: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:16.957: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Sep 11 20:39:16.991: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8880 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:16.992: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:18.023: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:39:18.060: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8880 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 11 20:39:18.060: INFO: >>> kubeConfig: /workspace/.kube/config
Sep 11 20:39:19.226: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Sep 11 20:39:19.226: INFO: Getting external IP address for e2e-8b7803f865-abe28-minion-group-7mqn
Sep 11 20:39:19.226: INFO: SSH "test `cat \"/var/lib/kubelet/mount-propagation-8880\"/master/file` = master" on e2e-8b7803f865-abe28-minion-group-7mqn(34.83.140.155:22)
Sep 11 20:39:19.692: INFO: ssh prow@34.83.140.155:22: command:   test `cat "/var/lib/kubelet/mount-propagation-8880"/master/file` = master
Sep 11 20:39:19.692: INFO: ssh prow@34.83.140.155:22: stdout:    ""
Sep 11 20:39:19.692: INFO: ssh prow@34.83.140.155:22: stderr:    ""
Sep 11 20:39:19.692: INFO: ssh prow@34.83.140.155:22: exit code: 0
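The mount-propagation lines above each record one `cat` exec inside a pod: the command succeeds only when that pod's mount propagation mode lets it see the file (the `slave` pod, with HostToContainer propagation, sees the host's mount; the `private` pod sees only its own). A minimal sketch, not part of the test suite, of how those log lines could be parsed into a visibility matrix:

```python
# Illustrative parser for the "pod <name> mount <dir>" log lines above;
# the sample lines are copied from the log, the helper names are made up.
import re

# Sample lines copied from the log output above.
log = """\
pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
pod slave mount host: stdout: "host", stderr: "" error: <nil>
pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
pod private mount private: stdout: "private", stderr: "" error: <nil>
"""

pattern = re.compile(r'pod (\w+) mount (\w+): stdout: "(\w*)".*error: (.*)')

def visibility(text):
    """Map (pod, mount) -> True when the pod could read the file."""
    seen = {}
    for line in text.splitlines():
        m = pattern.search(line)
        if m:
            pod, mount, stdout, err = m.groups()
            # A visible mount prints its own name and exits cleanly (<nil>).
            seen[(pod, mount)] = err.strip() == "<nil>" and stdout == mount
    return seen

matrix = visibility(log)
print(matrix[("slave", "host")], matrix[("private", "master")])  # → True False
```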
... skipping 955 lines ...
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 11 20:39:58.440: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-3694
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  test/e2e/apps/job.go:226
STEP: Creating a job
STEP: Ensuring job exceeds backoffLimit
STEP: Checking that 2 pods were created and their status is failed
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:152
Sep 11 20:40:13.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3694" for this suite.
Sep 11 20:40:19.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 11 20:40:20.614: INFO: namespace job-3694 deletion completed in 7.44737296s


• [SLOW TEST:22.174 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  test/e2e/apps/job.go:226
------------------------------
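The Job test above creates a job whose pods always fail and verifies the controller gives up once `backoffLimit` is exceeded; the check for exactly 2 failed pods suggests a limit of 1 (one retry). An illustrative manifest of that shape — this is a sketch, not the test's actual spec, and the names are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: backofflimit-demo      # illustrative name, not from the test
spec:
  backoffLimit: 1              # allow one retry: 2 failed pods in total
  template:
    spec:
      restartPolicy: Never     # each failure produces a new pod
      containers:
      - name: fail
        image: busybox
        command: ["sh", "-c", "exit 1"]   # always fail
```

With `restartPolicy: Never`, each failed pod counts toward `backoffLimit`; once it is exceeded the Job is marked Failed with reason `BackoffLimitExceeded`.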
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:92
... skipping 133 lines ...
      test/e2e/storage/testsuites/subpath.go:361

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:145
------------------------------
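The `S` entries and the "Driver local doesn't support DynamicPV -- skipping" message come from the storage testsuite's capability gating in `test/e2e/storage/testsuites/base.go`: each driver declares which volume patterns it supports, and unsupported combinations are skipped before the test body runs. A rough sketch of that gating pattern — the real framework is in Go, and the capability sets and helper names below are assumptions for illustration:

```python
# Illustrative capability-based skipping; capability sets here are assumed,
# not taken from the actual driver definitions.
SUPPORTED = {
    "local": {"InlineVolume", "PreprovisionedPV"},
    "gcepd": {"InlineVolume", "PreprovisionedPV", "DynamicPV"},
}

class SkipTest(Exception):
    """Raised to mark a test pattern as skipped rather than failed."""

def run_pattern(driver, pattern):
    # Gate the test body on the driver's declared capabilities.
    if pattern not in SUPPORTED.get(driver, set()):
        raise SkipTest(f"Driver {driver} doesn't support {pattern} -- skipping")
    return f"running {pattern} on {driver}"

try:
    run_pattern("local", "DynamicPV")
except SkipTest as e:
    print(e)  # matches the skip message in the log above
```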
S{"component":"entrypoint","file":"prow/entrypoint/run.go:163","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2019-09-11T20:40:32Z"}
Traceback (most recent call last):
  File "../test-infra/scenarios/kubernetes_e2e.py", line 778, in <module>
    main(parse_args())
  File "../test-infra/scenarios/kubernetes_e2e.py", line 626, in main
    mode.start(runner_args)
  File "../test-infra/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 13 lines ...