Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-03-28 11:25
Elapsed: 2h0m
Revision: release-0.2
resultstore: https://source.cloud.google.com/results/invocations/bd71b54f-dd8e-4f06-9d6d-63fdde065c3c/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 128 lines ...
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: 652ce8b6-8fbd-4b14-a06f-a62ed9839b71
Loading: 
Loading: 0 packages loaded
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_go/releases/download/v0.22.2/rules_go-v0.22.2.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/kubernetes/repo-infra/archive/v0.0.3.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
    currently loading: test/e2e ... (3 packages)
Analyzing: 3 targets (3 packages loaded, 0 targets configured)
Analyzing: 3 targets (16 packages loaded, 9 targets configured)
... skipping 1699 lines ...
    ubuntu-1804:
    ubuntu-1804: TASK [sysprep : Truncate shell history] ****************************************
    ubuntu-1804: ok: [default] => (item={u'path': u'/root/.bash_history'})
    ubuntu-1804: ok: [default] => (item={u'path': u'/home/ubuntu/.bash_history'})
    ubuntu-1804:
    ubuntu-1804: PLAY RECAP *********************************************************************
    ubuntu-1804: default                    : ok=60   changed=46   unreachable=0    failed=0    skipped=72   rescued=0    ignored=0
    ubuntu-1804:
==> ubuntu-1804: Deleting instance...
    ubuntu-1804: Instance has been deleted!
==> ubuntu-1804: Creating image...
==> ubuntu-1804: Deleting disk...
    ubuntu-1804: Disk has been deleted!
... skipping 397 lines ...
test1-md-0-585dd5848b-g87j4   gce://k8s-boskos-gce-project-02/us-east4-a/test1-md-0-r7gkm       running
test1-md-0-585dd5848b-rcnml   gce://k8s-boskos-gce-project-02/us-east4-a/test1-md-0-mr4d9       running
node/test1-controlplane-0.c.k8s-boskos-gce-project-02.internal condition met
node/test1-controlplane-2.c.k8s-boskos-gce-project-02.internal condition met
node/test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal condition met
node/test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal condition met
error: timed out waiting for the condition on nodes/test1-controlplane-1.c.k8s-boskos-gce-project-02.internal
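The four "condition met" lines and the timeout above have the characteristic shape of kubectl wait output. A minimal sketch of the readiness gate that appears to have expired here, assuming the harness gated on the Ready condition (the command form and the timeout value are assumptions, not taken from this log):

    # Hypothetical reconstruction: kubectl prints "node/<name> condition met"
    # per node on success, and "error: timed out waiting for the condition
    # on nodes/<name>" once the timeout elapses.
    kubectl wait --for=condition=Ready nodes --all --timeout=10m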
Conformance test: not doing test setup.
I0328 11:54:34.821364   24783 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0328 11:54:34.822494   24783 e2e.go:124] Starting e2e run "de236f45-13b8-45c2-8a2c-bf9c62a16cd9" on Ginkgo node 1
{"msg":"Test Suite starting","total":283,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585396473 - Will randomize all specs
Will run 283 of 4993 specs
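Each bullet of the form {"msg":"PASSED ...","total":283,"completed":N,...} that follows is a machine-readable Ginkgo progress record carrying running totals. If build-log.txt is saved locally, the records can be tallied with standard tools; a sketch (the grep/jq invocation is illustrative, not part of the harness):

    # Each passing spec emits one JSON progress record; extract them and
    # read the running counters from the most recent one.
    grep -o '{"msg":"PASSED[^}]*}' build-log.txt | jq -s 'last | {completed, skipped, failed}'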

Mar 28 11:54:34.840: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 28 11:54:34.855: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 28 11:54:35.010: INFO: Condition Ready of node test1-controlplane-1.c.k8s-boskos-gce-project-02.internal is false, but Node is tainted by NodeController with [{node-role.kubernetes.io/master  NoSchedule <nil>} {node.kubernetes.io/not-ready  NoSchedule <nil>} {node.kubernetes.io/network-unavailable  NoSchedule 2020-03-28 11:54:21 +0000 UTC} {node.kubernetes.io/not-ready  NoExecute 2020-03-28 11:54:22 +0000 UTC}]. Failure
Mar 28 11:54:35.010: INFO: Unschedulable nodes:
Mar 28 11:54:35.010: INFO: -> test1-controlplane-1.c.k8s-boskos-gce-project-02.internal Ready=false Network=false Taints=[{node-role.kubernetes.io/master  NoSchedule <nil>} {node.kubernetes.io/not-ready  NoSchedule <nil>} {node.kubernetes.io/network-unavailable  NoSchedule 2020-03-28 11:54:21 +0000 UTC} {node.kubernetes.io/not-ready  NoExecute 2020-03-28 11:54:22 +0000 UTC}] NonblockingTaints:node-role.kubernetes.io/master
Mar 28 11:54:35.010: INFO: ================================
Mar 28 11:55:05.065: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 28 11:55:05.227: INFO: The status of Pod kube-scheduler-test1-controlplane-1.c.k8s-boskos-gce-project-02.internal is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 28 11:55:05.227: INFO: 24 / 25 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 28 11:55:05.227: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 28 11:55:05.227: INFO: POD                                                                       NODE                                                       PHASE    GRACE  CONDITIONS
Mar 28 11:55:05.227: INFO: kube-scheduler-test1-controlplane-1.c.k8s-boskos-gce-project-02.internal  test1-controlplane-1.c.k8s-boskos-gce-project-02.internal  Pending         []
Mar 28 11:55:05.227: INFO: 
Mar 28 11:55:07.397: INFO: The status of Pod kube-scheduler-test1-controlplane-1.c.k8s-boskos-gce-project-02.internal is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 28 11:55:07.397: INFO: 24 / 25 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Mar 28 11:55:07.397: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 28 11:55:07.397: INFO: POD                                                                       NODE                                                       PHASE    GRACE  CONDITIONS
Mar 28 11:55:07.397: INFO: kube-scheduler-test1-controlplane-1.c.k8s-boskos-gce-project-02.internal  test1-controlplane-1.c.k8s-boskos-gce-project-02.internal  Pending         []
Mar 28 11:55:07.397: INFO: 
Mar 28 11:55:09.374: INFO: The status of Pod kube-scheduler-test1-controlplane-1.c.k8s-boskos-gce-project-02.internal is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 28 11:55:09.374: INFO: 24 / 25 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Mar 28 11:55:09.374: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 28 11:55:09.374: INFO: POD                                                                       NODE                                                       PHASE    GRACE  CONDITIONS
Mar 28 11:55:09.374: INFO: kube-scheduler-test1-controlplane-1.c.k8s-boskos-gce-project-02.internal  test1-controlplane-1.c.k8s-boskos-gce-project-02.internal  Pending         []
Mar 28 11:55:09.374: INFO: 
Mar 28 11:55:11.373: INFO: The status of Pod kube-scheduler-test1-controlplane-1.c.k8s-boskos-gce-project-02.internal is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 28 11:55:11.373: INFO: 24 / 25 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
Mar 28 11:55:11.373: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 28 11:55:11.373: INFO: POD                                                                       NODE                                                       PHASE    GRACE  CONDITIONS
Mar 28 11:55:11.373: INFO: kube-scheduler-test1-controlplane-1.c.k8s-boskos-gce-project-02.internal  test1-controlplane-1.c.k8s-boskos-gce-project-02.internal  Pending         []
Mar 28 11:55:11.373: INFO: 
Mar 28 11:55:13.376: INFO: 25 / 25 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
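The two-second INFO loop above is the e2e framework polling kube-system until all pods are Running and Ready; here it waited roughly 8 seconds for the kube-scheduler pod on test1-controlplane-1 to leave Pending. A rough manual equivalent against the same cluster, assuming the kubeconfig path logged earlier (the command itself is illustrative):

    # List any kube-system pods not yet in phase Running, as the suite's
    # poll loop was effectively doing every 2 seconds.
    KUBECONFIG=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig \
      kubectl get pods -n kube-system --field-selector=status.phase!=Running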
... skipping 41 lines ...
  test/e2e/framework/framework.go:175
Mar 28 11:55:21.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6790" for this suite.
STEP: Destroying namespace "webhook-6790-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":283,"completed":1,"skipped":45,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 11:55:22.084: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 28 11:55:22.243: INFO: Waiting up to 5m0s for pod "pod-945eb38c-0ead-474c-9dc1-853687d3337c" in namespace "emptydir-4538" to be "Succeeded or Failed"
Mar 28 11:55:22.273: INFO: Pod "pod-945eb38c-0ead-474c-9dc1-853687d3337c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.013415ms
Mar 28 11:55:24.302: INFO: Pod "pod-945eb38c-0ead-474c-9dc1-853687d3337c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058927857s
Mar 28 11:55:26.332: INFO: Pod "pod-945eb38c-0ead-474c-9dc1-853687d3337c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088177962s
STEP: Saw pod success
Mar 28 11:55:26.332: INFO: Pod "pod-945eb38c-0ead-474c-9dc1-853687d3337c" satisfied condition "Succeeded or Failed"
Mar 28 11:55:26.360: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-945eb38c-0ead-474c-9dc1-853687d3337c container test-container: <nil>
STEP: delete the pod
Mar 28 11:55:26.447: INFO: Waiting for pod pod-945eb38c-0ead-474c-9dc1-853687d3337c to disappear
Mar 28 11:55:26.476: INFO: Pod pod-945eb38c-0ead-474c-9dc1-853687d3337c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 11:55:26.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4538" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":2,"skipped":50,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 28 11:55:33.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8771" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":283,"completed":3,"skipped":73,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 35 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0328 11:55:44.490047   24783 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 28 11:55:44.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2205" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":283,"completed":4,"skipped":76,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 38 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 28 11:55:46.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4596" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":283,"completed":5,"skipped":131,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-203e2f56-d369-4ef3-b9b6-d366520b8ff4
STEP: Creating a pod to test consume secrets
Mar 28 11:55:46.361: INFO: Waiting up to 5m0s for pod "pod-secrets-5894e0fd-750c-4fb2-a303-39ab80377c4b" in namespace "secrets-520" to be "Succeeded or Failed"
Mar 28 11:55:46.389: INFO: Pod "pod-secrets-5894e0fd-750c-4fb2-a303-39ab80377c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.516115ms
Mar 28 11:55:48.418: INFO: Pod "pod-secrets-5894e0fd-750c-4fb2-a303-39ab80377c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056687167s
Mar 28 11:55:50.447: INFO: Pod "pod-secrets-5894e0fd-750c-4fb2-a303-39ab80377c4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085778513s
STEP: Saw pod success
Mar 28 11:55:50.447: INFO: Pod "pod-secrets-5894e0fd-750c-4fb2-a303-39ab80377c4b" satisfied condition "Succeeded or Failed"
Mar 28 11:55:50.476: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-secrets-5894e0fd-750c-4fb2-a303-39ab80377c4b container secret-volume-test: <nil>
STEP: delete the pod
Mar 28 11:55:50.558: INFO: Waiting for pod pod-secrets-5894e0fd-750c-4fb2-a303-39ab80377c4b to disappear
Mar 28 11:55:50.586: INFO: Pod pod-secrets-5894e0fd-750c-4fb2-a303-39ab80377c4b no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 28 11:55:50.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-520" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":6,"skipped":141,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 28 11:55:53.543: INFO: Successfully updated pod "annotationupdate8973a139-a42b-4294-a237-cc1594d1aeb9"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 28 11:55:55.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1654" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":7,"skipped":154,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 28 11:56:00.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6840" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":283,"completed":8,"skipped":193,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 11:56:00.762: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 28 11:56:00.916: INFO: Waiting up to 5m0s for pod "pod-b2e49c0c-7a9a-4ab8-83f4-62d423a75ec4" in namespace "emptydir-4039" to be "Succeeded or Failed"
Mar 28 11:56:00.948: INFO: Pod "pod-b2e49c0c-7a9a-4ab8-83f4-62d423a75ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.296596ms
Mar 28 11:56:02.977: INFO: Pod "pod-b2e49c0c-7a9a-4ab8-83f4-62d423a75ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060776111s
Mar 28 11:56:05.006: INFO: Pod "pod-b2e49c0c-7a9a-4ab8-83f4-62d423a75ec4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089704632s
STEP: Saw pod success
Mar 28 11:56:05.006: INFO: Pod "pod-b2e49c0c-7a9a-4ab8-83f4-62d423a75ec4" satisfied condition "Succeeded or Failed"
Mar 28 11:56:05.035: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-b2e49c0c-7a9a-4ab8-83f4-62d423a75ec4 container test-container: <nil>
STEP: delete the pod
Mar 28 11:56:05.106: INFO: Waiting for pod pod-b2e49c0c-7a9a-4ab8-83f4-62d423a75ec4 to disappear
Mar 28 11:56:05.136: INFO: Pod pod-b2e49c0c-7a9a-4ab8-83f4-62d423a75ec4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 11:56:05.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4039" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":9,"skipped":193,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 11:56:05.382: INFO: Waiting up to 5m0s for pod "downwardapi-volume-989e36d8-9189-42d9-8f5e-ae13987ec895" in namespace "projected-4735" to be "Succeeded or Failed"
Mar 28 11:56:05.412: INFO: Pod "downwardapi-volume-989e36d8-9189-42d9-8f5e-ae13987ec895": Phase="Pending", Reason="", readiness=false. Elapsed: 30.012409ms
Mar 28 11:56:07.444: INFO: Pod "downwardapi-volume-989e36d8-9189-42d9-8f5e-ae13987ec895": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061469926s
STEP: Saw pod success
Mar 28 11:56:07.444: INFO: Pod "downwardapi-volume-989e36d8-9189-42d9-8f5e-ae13987ec895" satisfied condition "Succeeded or Failed"
Mar 28 11:56:07.473: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-989e36d8-9189-42d9-8f5e-ae13987ec895 container client-container: <nil>
STEP: delete the pod
Mar 28 11:56:07.546: INFO: Waiting for pod downwardapi-volume-989e36d8-9189-42d9-8f5e-ae13987ec895 to disappear
Mar 28 11:56:07.575: INFO: Pod downwardapi-volume-989e36d8-9189-42d9-8f5e-ae13987ec895 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 28 11:56:07.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4735" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":10,"skipped":193,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 30 lines ...
Mar 28 11:56:34.556: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 28 11:56:34.807: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 28 11:56:34.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6628" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":283,"completed":11,"skipped":227,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
Mar 28 11:56:41.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8782" for this suite.
STEP: Destroying namespace "nsdeletetest-1996" for this suite.
Mar 28 11:56:41.498: INFO: Namespace nsdeletetest-1996 was already deleted
STEP: Destroying namespace "nsdeletetest-6111" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":283,"completed":12,"skipped":233,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 28 11:57:04.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-944" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":283,"completed":13,"skipped":241,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 17 lines ...
Mar 28 11:57:11.508: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 28 11:57:11.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8833" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":283,"completed":14,"skipped":246,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-5301da97-547a-4c59-98e0-2b0739cf63bd
STEP: Creating a pod to test consume configMaps
Mar 28 11:57:11.796: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ca1ebeb-e32f-4aa0-b6c2-fca5474c7a21" in namespace "configmap-2497" to be "Succeeded or Failed"
Mar 28 11:57:11.824: INFO: Pod "pod-configmaps-5ca1ebeb-e32f-4aa0-b6c2-fca5474c7a21": Phase="Pending", Reason="", readiness=false. Elapsed: 27.999522ms
Mar 28 11:57:13.853: INFO: Pod "pod-configmaps-5ca1ebeb-e32f-4aa0-b6c2-fca5474c7a21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057305164s
STEP: Saw pod success
Mar 28 11:57:13.853: INFO: Pod "pod-configmaps-5ca1ebeb-e32f-4aa0-b6c2-fca5474c7a21" satisfied condition "Succeeded or Failed"
Mar 28 11:57:13.882: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-configmaps-5ca1ebeb-e32f-4aa0-b6c2-fca5474c7a21 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 11:57:13.953: INFO: Waiting for pod pod-configmaps-5ca1ebeb-e32f-4aa0-b6c2-fca5474c7a21 to disappear
Mar 28 11:57:13.982: INFO: Pod pod-configmaps-5ca1ebeb-e32f-4aa0-b6c2-fca5474c7a21 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 11:57:13.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2497" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":15,"skipped":260,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 28 11:57:19.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7949" for this suite.
STEP: Destroying namespace "webhook-7949-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":283,"completed":16,"skipped":295,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
Mar 28 11:57:19.625: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 28 11:57:21.912: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 28 11:57:21.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4181" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":17,"skipped":344,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 29 lines ...
Mar 28 11:57:42.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 28 11:57:42.565: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 28 11:57:42.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1569" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":283,"completed":18,"skipped":357,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Mar 28 11:57:49.345: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 28 11:57:49.595: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:175
Mar 28 11:57:49.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1628" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":19,"skipped":389,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 11:57:49.684: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 28 11:57:49.840: INFO: Waiting up to 5m0s for pod "pod-7ddbb522-5b30-41bb-9791-733822273bf9" in namespace "emptydir-6492" to be "Succeeded or Failed"
Mar 28 11:57:49.869: INFO: Pod "pod-7ddbb522-5b30-41bb-9791-733822273bf9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.440058ms
Mar 28 11:57:51.898: INFO: Pod "pod-7ddbb522-5b30-41bb-9791-733822273bf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057466581s
STEP: Saw pod success
Mar 28 11:57:51.898: INFO: Pod "pod-7ddbb522-5b30-41bb-9791-733822273bf9" satisfied condition "Succeeded or Failed"
Mar 28 11:57:51.926: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-7ddbb522-5b30-41bb-9791-733822273bf9 container test-container: <nil>
STEP: delete the pod
Mar 28 11:57:52.000: INFO: Waiting for pod pod-7ddbb522-5b30-41bb-9791-733822273bf9 to disappear
Mar 28 11:57:52.030: INFO: Pod pod-7ddbb522-5b30-41bb-9791-733822273bf9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 11:57:52.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6492" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":20,"skipped":399,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Mar 28 11:57:54.364: INFO: Trying to dial the pod
Mar 28 11:57:59.456: INFO: Controller my-hostname-basic-7ba58448-9e44-46b7-ae99-b1d626d037a8: Got expected result from replica 1 [my-hostname-basic-7ba58448-9e44-46b7-ae99-b1d626d037a8-sqnqj]: "my-hostname-basic-7ba58448-9e44-46b7-ae99-b1d626d037a8-sqnqj", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 28 11:57:59.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5275" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":21,"skipped":402,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 39 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 28 11:58:04.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-498" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":283,"completed":22,"skipped":419,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0328 11:58:14.710200   24783 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 28 11:58:14.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5280" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":283,"completed":23,"skipped":436,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 28 11:58:14.776: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Mar 28 12:00:15.582: INFO: Successfully updated pod "var-expansion-90abd833-b806-4487-9ce8-7a3e1a6005f5"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Mar 28 12:00:17.640: INFO: Deleting pod "var-expansion-90abd833-b806-4487-9ce8-7a3e1a6005f5" in namespace "var-expansion-3113"
Mar 28 12:00:17.674: INFO: Wait up to 5m0s for pod "var-expansion-90abd833-b806-4487-9ce8-7a3e1a6005f5" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 28 12:01:01.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3113" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":283,"completed":24,"skipped":471,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Mar 28 12:01:04.092: INFO: Pod pod-hostip-f410d6e2-732e-4e6b-aff2-e44d4190c681 has hostIP: 10.150.0.4
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 28 12:01:04.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3164" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":283,"completed":25,"skipped":478,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 51 lines ...
Mar 28 12:01:20.989: INFO: stderr: ""
Mar 28 12:01:20.989: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:01:20.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4839" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":283,"completed":26,"skipped":509,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 56 lines ...
Mar 28 12:03:34.477: INFO: Waiting for statefulset status.replicas updated to 0
Mar 28 12:03:34.509: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 28 12:03:34.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1933" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":283,"completed":27,"skipped":518,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:03:54.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7915" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":283,"completed":28,"skipped":529,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 28 12:03:54.587: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-218d3b58-e07c-42f3-b8b7-61f7e6551b04" in namespace "security-context-test-3900" to be "Succeeded or Failed"
Mar 28 12:03:54.617: INFO: Pod "busybox-privileged-false-218d3b58-e07c-42f3-b8b7-61f7e6551b04": Phase="Pending", Reason="", readiness=false. Elapsed: 30.548732ms
Mar 28 12:03:56.648: INFO: Pod "busybox-privileged-false-218d3b58-e07c-42f3-b8b7-61f7e6551b04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061700278s
Mar 28 12:03:56.648: INFO: Pod "busybox-privileged-false-218d3b58-e07c-42f3-b8b7-61f7e6551b04" satisfied condition "Succeeded or Failed"
Mar 28 12:03:56.694: INFO: Got logs for pod "busybox-privileged-false-218d3b58-e07c-42f3-b8b7-61f7e6551b04": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 28 12:03:56.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3900" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":29,"skipped":548,"failed":0}

------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 28 12:04:01.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6986" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":283,"completed":30,"skipped":548,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:175
Mar 28 12:04:01.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8852" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":283,"completed":31,"skipped":554,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 28 12:04:18.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5032" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":283,"completed":32,"skipped":582,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 28 12:04:20.455: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 28 12:04:20.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2300" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":33,"skipped":588,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 12:04:20.795: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6bb2586-09f4-4118-b975-e9d55dcd042f" in namespace "projected-3646" to be "Succeeded or Failed"
Mar 28 12:04:20.826: INFO: Pod "downwardapi-volume-e6bb2586-09f4-4118-b975-e9d55dcd042f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.284269ms
Mar 28 12:04:22.858: INFO: Pod "downwardapi-volume-e6bb2586-09f4-4118-b975-e9d55dcd042f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063376231s
STEP: Saw pod success
Mar 28 12:04:22.858: INFO: Pod "downwardapi-volume-e6bb2586-09f4-4118-b975-e9d55dcd042f" satisfied condition "Succeeded or Failed"
Mar 28 12:04:22.889: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-e6bb2586-09f4-4118-b975-e9d55dcd042f container client-container: <nil>
STEP: delete the pod
Mar 28 12:04:22.982: INFO: Waiting for pod downwardapi-volume-e6bb2586-09f4-4118-b975-e9d55dcd042f to disappear
Mar 28 12:04:23.014: INFO: Pod downwardapi-volume-e6bb2586-09f4-4118-b975-e9d55dcd042f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 28 12:04:23.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3646" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":283,"completed":34,"skipped":592,"failed":0}
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 337 lines ...
Mar 28 12:04:28.583: INFO: Deleting ReplicationController proxy-service-d2rj6 took: 37.402612ms
Mar 28 12:04:28.883: INFO: Terminating ReplicationController proxy-service-d2rj6 pods took: 300.361865ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 28 12:04:41.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1615" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":283,"completed":35,"skipped":593,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
Mar 28 12:04:46.608: INFO: stdout: "service/rm3 exposed\n"
Mar 28 12:04:46.639: INFO: Service rm3 in namespace kubectl-6011 found.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:04:48.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6011" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":283,"completed":36,"skipped":598,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 12:04:48.794: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 28 12:04:48.957: INFO: Waiting up to 5m0s for pod "pod-db6785d7-bea8-4023-8b26-755d0c1f6dfc" in namespace "emptydir-7102" to be "Succeeded or Failed"
Mar 28 12:04:48.988: INFO: Pod "pod-db6785d7-bea8-4023-8b26-755d0c1f6dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.074195ms
Mar 28 12:04:51.020: INFO: Pod "pod-db6785d7-bea8-4023-8b26-755d0c1f6dfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062039974s
STEP: Saw pod success
Mar 28 12:04:51.020: INFO: Pod "pod-db6785d7-bea8-4023-8b26-755d0c1f6dfc" satisfied condition "Succeeded or Failed"
Mar 28 12:04:51.050: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-db6785d7-bea8-4023-8b26-755d0c1f6dfc container test-container: <nil>
STEP: delete the pod
Mar 28 12:04:51.141: INFO: Waiting for pod pod-db6785d7-bea8-4023-8b26-755d0c1f6dfc to disappear
Mar 28 12:04:51.172: INFO: Pod pod-db6785d7-bea8-4023-8b26-755d0c1f6dfc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 12:04:51.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7102" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":37,"skipped":633,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 28 12:04:53.557: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 28 12:04:53.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7532" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":38,"skipped":660,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 28 12:04:53.726: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Mar 28 12:04:53.907: INFO: Waiting up to 5m0s for pod "client-containers-79501f85-a501-4845-9208-dab85c9f3155" in namespace "containers-6279" to be "Succeeded or Failed"
Mar 28 12:04:53.939: INFO: Pod "client-containers-79501f85-a501-4845-9208-dab85c9f3155": Phase="Pending", Reason="", readiness=false. Elapsed: 31.465183ms
Mar 28 12:04:55.970: INFO: Pod "client-containers-79501f85-a501-4845-9208-dab85c9f3155": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06231782s
STEP: Saw pod success
Mar 28 12:04:55.970: INFO: Pod "client-containers-79501f85-a501-4845-9208-dab85c9f3155" satisfied condition "Succeeded or Failed"
Mar 28 12:04:56.000: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod client-containers-79501f85-a501-4845-9208-dab85c9f3155 container test-container: <nil>
STEP: delete the pod
Mar 28 12:04:56.079: INFO: Waiting for pod client-containers-79501f85-a501-4845-9208-dab85c9f3155 to disappear
Mar 28 12:04:56.111: INFO: Pod client-containers-79501f85-a501-4845-9208-dab85c9f3155 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 28 12:04:56.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6279" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":283,"completed":39,"skipped":677,"failed":0}

------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 21 lines ...
Mar 28 12:05:20.446: INFO: The status of Pod test-webserver-8c17d3ad-8ba0-4707-9971-280d22445fea is Running (Ready = true)
Mar 28 12:05:20.476: INFO: Container started at 2020-03-28 12:04:57 +0000 UTC, pod became ready at 2020-03-28 12:05:20 +0000 UTC
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 28 12:05:20.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5829" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":283,"completed":40,"skipped":677,"failed":0}
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Mar 28 12:05:43.561: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 28 12:05:43.808: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 28 12:05:43.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-47" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":41,"skipped":679,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 28 12:05:43.899: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 28 12:05:44.067: INFO: Waiting up to 5m0s for pod "downward-api-e5ae036f-44fd-4c93-92fe-28627449d821" in namespace "downward-api-3503" to be "Succeeded or Failed"
Mar 28 12:05:44.097: INFO: Pod "downward-api-e5ae036f-44fd-4c93-92fe-28627449d821": Phase="Pending", Reason="", readiness=false. Elapsed: 30.507704ms
Mar 28 12:05:46.128: INFO: Pod "downward-api-e5ae036f-44fd-4c93-92fe-28627449d821": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061520093s
STEP: Saw pod success
Mar 28 12:05:46.128: INFO: Pod "downward-api-e5ae036f-44fd-4c93-92fe-28627449d821" satisfied condition "Succeeded or Failed"
Mar 28 12:05:46.159: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod downward-api-e5ae036f-44fd-4c93-92fe-28627449d821 container dapi-container: <nil>
STEP: delete the pod
Mar 28 12:05:46.238: INFO: Waiting for pod downward-api-e5ae036f-44fd-4c93-92fe-28627449d821 to disappear
Mar 28 12:05:46.269: INFO: Pod downward-api-e5ae036f-44fd-4c93-92fe-28627449d821 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 28 12:05:46.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3503" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":283,"completed":42,"skipped":691,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-cbc5c4fb-3e03-4a27-ab6c-0fc12e5dcfc3
STEP: Creating a pod to test consume secrets
Mar 28 12:05:46.700: INFO: Waiting up to 5m0s for pod "pod-secrets-f7ff0df8-4f6c-49ae-9adb-70551e8c1d9b" in namespace "secrets-3397" to be "Succeeded or Failed"
Mar 28 12:05:46.732: INFO: Pod "pod-secrets-f7ff0df8-4f6c-49ae-9adb-70551e8c1d9b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.519529ms
Mar 28 12:05:48.763: INFO: Pod "pod-secrets-f7ff0df8-4f6c-49ae-9adb-70551e8c1d9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063507577s
STEP: Saw pod success
Mar 28 12:05:48.764: INFO: Pod "pod-secrets-f7ff0df8-4f6c-49ae-9adb-70551e8c1d9b" satisfied condition "Succeeded or Failed"
Mar 28 12:05:48.794: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-secrets-f7ff0df8-4f6c-49ae-9adb-70551e8c1d9b container secret-volume-test: <nil>
STEP: delete the pod
Mar 28 12:05:48.873: INFO: Waiting for pod pod-secrets-f7ff0df8-4f6c-49ae-9adb-70551e8c1d9b to disappear
Mar 28 12:05:48.905: INFO: Pod pod-secrets-f7ff0df8-4f6c-49ae-9adb-70551e8c1d9b no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 28 12:05:48.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3397" for this suite.
STEP: Destroying namespace "secret-namespace-8060" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":283,"completed":43,"skipped":711,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 28 12:05:51.296: INFO: Initial restart count of pod busybox-5655f1a5-d91f-466e-88bb-935b0a141a4e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 28 12:09:53.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8510" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":283,"completed":44,"skipped":728,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 28 12:09:53.272: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:09:53.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-241" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":283,"completed":45,"skipped":733,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Mar 28 12:12:24.206: INFO: Restart count of pod container-probe-9161/liveness-a2e2435e-5995-4efc-9201-ac80dbbf0f90 is now 5 (2m28.321276768s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 28 12:12:24.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9161" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":283,"completed":46,"skipped":741,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  test/e2e/framework/framework.go:175
Mar 28 12:12:53.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5253" for this suite.
STEP: Destroying namespace "nsdeletetest-7592" for this suite.
Mar 28 12:12:54.027: INFO: Namespace nsdeletetest-7592 was already deleted
STEP: Destroying namespace "nsdeletetest-8978" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":283,"completed":47,"skipped":760,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:180
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 28 12:12:56.417: INFO: Waiting up to 5m0s for pod "client-envvars-e19265de-a5ab-45bf-b590-c0efe9159b02" in namespace "pods-6653" to be "Succeeded or Failed"
Mar 28 12:12:56.448: INFO: Pod "client-envvars-e19265de-a5ab-45bf-b590-c0efe9159b02": Phase="Pending", Reason="", readiness=false. Elapsed: 30.868535ms
Mar 28 12:12:58.480: INFO: Pod "client-envvars-e19265de-a5ab-45bf-b590-c0efe9159b02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062388949s
STEP: Saw pod success
Mar 28 12:12:58.480: INFO: Pod "client-envvars-e19265de-a5ab-45bf-b590-c0efe9159b02" satisfied condition "Succeeded or Failed"
Mar 28 12:12:58.511: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod client-envvars-e19265de-a5ab-45bf-b590-c0efe9159b02 container env3cont: <nil>
STEP: delete the pod
Mar 28 12:12:58.603: INFO: Waiting for pod client-envvars-e19265de-a5ab-45bf-b590-c0efe9159b02 to disappear
Mar 28 12:12:58.634: INFO: Pod client-envvars-e19265de-a5ab-45bf-b590-c0efe9159b02 no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 28 12:12:58.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6653" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":283,"completed":48,"skipped":788,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 12:12:58.728: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 28 12:12:58.894: INFO: Waiting up to 5m0s for pod "pod-38c143fe-3e65-4f22-a5fb-7991b36324d1" in namespace "emptydir-8113" to be "Succeeded or Failed"
Mar 28 12:12:58.925: INFO: Pod "pod-38c143fe-3e65-4f22-a5fb-7991b36324d1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.790434ms
Mar 28 12:13:00.957: INFO: Pod "pod-38c143fe-3e65-4f22-a5fb-7991b36324d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062461616s
Mar 28 12:13:02.988: INFO: Pod "pod-38c143fe-3e65-4f22-a5fb-7991b36324d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093397966s
STEP: Saw pod success
Mar 28 12:13:02.988: INFO: Pod "pod-38c143fe-3e65-4f22-a5fb-7991b36324d1" satisfied condition "Succeeded or Failed"
Mar 28 12:13:03.019: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-38c143fe-3e65-4f22-a5fb-7991b36324d1 container test-container: <nil>
STEP: delete the pod
Mar 28 12:13:03.099: INFO: Waiting for pod pod-38c143fe-3e65-4f22-a5fb-7991b36324d1 to disappear
Mar 28 12:13:03.130: INFO: Pod pod-38c143fe-3e65-4f22-a5fb-7991b36324d1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 12:13:03.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8113" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":49,"skipped":789,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
Mar 28 12:13:03.544: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e94d28d5-c463-42c6-9324-6560091268e4", Controller:(*bool)(0xc00313e0c6), BlockOwnerDeletion:(*bool)(0xc00313e0c7)}}
Mar 28 12:13:03.578: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"84fa07b4-fad1-4378-813d-e43bd84720bb", Controller:(*bool)(0xc00316f87e), BlockOwnerDeletion:(*bool)(0xc00316f87f)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 28 12:13:08.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1716" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":283,"completed":50,"skipped":795,"failed":0}

------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Mar 28 12:13:10.097: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 28 12:13:10.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9510" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":283,"completed":51,"skipped":795,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Mar 28 12:13:13.407: INFO: stderr: ""
Mar 28 12:13:13.407: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:13:13.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7937" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":283,"completed":52,"skipped":819,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 28 12:13:17.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4975" for this suite.
STEP: Destroying namespace "webhook-4975-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":283,"completed":53,"skipped":844,"failed":0}

------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 11 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 28 12:13:20.809: INFO: Successfully updated pod "pod-update-activedeadlineseconds-36947443-fa44-45ad-a3da-84d275429912"
Mar 28 12:13:20.809: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-36947443-fa44-45ad-a3da-84d275429912" in namespace "pods-7140" to be "terminated due to deadline exceeded"
Mar 28 12:13:20.844: INFO: Pod "pod-update-activedeadlineseconds-36947443-fa44-45ad-a3da-84d275429912": Phase="Running", Reason="", readiness=true. Elapsed: 35.328219ms
Mar 28 12:13:22.875: INFO: Pod "pod-update-activedeadlineseconds-36947443-fa44-45ad-a3da-84d275429912": Phase="Running", Reason="", readiness=true. Elapsed: 2.066025431s
Mar 28 12:13:24.906: INFO: Pod "pod-update-activedeadlineseconds-36947443-fa44-45ad-a3da-84d275429912": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.097452682s
Mar 28 12:13:24.906: INFO: Pod "pod-update-activedeadlineseconds-36947443-fa44-45ad-a3da-84d275429912" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 28 12:13:24.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7140" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":283,"completed":54,"skipped":844,"failed":0}

------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1193
STEP: Creating statefulset with conflicting port in namespace statefulset-1193
STEP: Waiting until pod test-pod starts running in namespace statefulset-1193
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1193
Mar 28 12:13:29.365: INFO: Observed stateful pod in namespace: statefulset-1193, name: ss-0, uid: 906a1133-5d1e-416e-80cd-ad81bdb7803b, status phase: Pending. Waiting for statefulset controller to delete.
Mar 28 12:13:30.972: INFO: Observed stateful pod in namespace: statefulset-1193, name: ss-0, uid: 906a1133-5d1e-416e-80cd-ad81bdb7803b, status phase: Failed. Waiting for statefulset controller to delete.
Mar 28 12:13:30.981: INFO: Observed stateful pod in namespace: statefulset-1193, name: ss-0, uid: 906a1133-5d1e-416e-80cd-ad81bdb7803b, status phase: Failed. Waiting for statefulset controller to delete.
Mar 28 12:13:30.987: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1193
STEP: Removing pod with conflicting port in namespace statefulset-1193
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1193 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:110
Mar 28 12:13:33.106: INFO: Deleting all statefulset in ns statefulset-1193
Mar 28 12:13:33.137: INFO: Scaling statefulset ss to 0
Mar 28 12:13:43.268: INFO: Waiting for statefulset status.replicas updated to 0
Mar 28 12:13:43.299: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 28 12:13:43.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1193" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":283,"completed":55,"skipped":844,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 28 12:13:43.492: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Mar 28 12:13:43.668: INFO: Waiting up to 5m0s for pod "var-expansion-5820162e-cbdc-42e0-9700-862e964422c2" in namespace "var-expansion-4060" to be "Succeeded or Failed"
Mar 28 12:13:43.703: INFO: Pod "var-expansion-5820162e-cbdc-42e0-9700-862e964422c2": Phase="Pending", Reason="", readiness=false. Elapsed: 34.233171ms
Mar 28 12:13:45.734: INFO: Pod "var-expansion-5820162e-cbdc-42e0-9700-862e964422c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065728128s
STEP: Saw pod success
Mar 28 12:13:45.734: INFO: Pod "var-expansion-5820162e-cbdc-42e0-9700-862e964422c2" satisfied condition "Succeeded or Failed"
Mar 28 12:13:45.765: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod var-expansion-5820162e-cbdc-42e0-9700-862e964422c2 container dapi-container: <nil>
STEP: delete the pod
Mar 28 12:13:45.860: INFO: Waiting for pod var-expansion-5820162e-cbdc-42e0-9700-862e964422c2 to disappear
Mar 28 12:13:45.894: INFO: Pod var-expansion-5820162e-cbdc-42e0-9700-862e964422c2 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 28 12:13:45.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4060" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":283,"completed":56,"skipped":848,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Mar 28 12:13:48.479: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 28 12:13:48.740: INFO: Deleting pod dns-6989...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 28 12:13:48.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6989" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":283,"completed":57,"skipped":860,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 28 12:13:48.878: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Mar 28 12:13:49.038: INFO: Waiting up to 5m0s for pod "client-containers-dce9423c-df02-43e1-bdec-31e6a2c58a2b" in namespace "containers-5269" to be "Succeeded or Failed"
Mar 28 12:13:49.068: INFO: Pod "client-containers-dce9423c-df02-43e1-bdec-31e6a2c58a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.387727ms
Mar 28 12:13:51.099: INFO: Pod "client-containers-dce9423c-df02-43e1-bdec-31e6a2c58a2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061532922s
STEP: Saw pod success
Mar 28 12:13:51.099: INFO: Pod "client-containers-dce9423c-df02-43e1-bdec-31e6a2c58a2b" satisfied condition "Succeeded or Failed"
Mar 28 12:13:51.130: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod client-containers-dce9423c-df02-43e1-bdec-31e6a2c58a2b container test-container: <nil>
STEP: delete the pod
Mar 28 12:13:51.224: INFO: Waiting for pod client-containers-dce9423c-df02-43e1-bdec-31e6a2c58a2b to disappear
Mar 28 12:13:51.256: INFO: Pod client-containers-dce9423c-df02-43e1-bdec-31e6a2c58a2b no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 28 12:13:51.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5269" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":283,"completed":58,"skipped":869,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
  test/e2e/framework/framework.go:175
Mar 28 12:14:08.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6714" for this suite.
STEP: Destroying namespace "webhook-6714-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":283,"completed":59,"skipped":879,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-971d9da9-cc63-43e1-b435-90d8b0a87ef6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 12:14:13.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1995" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":60,"skipped":932,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 39 lines ...
Mar 28 12:15:35.404: INFO: Waiting for statefulset status.replicas updated to 0
Mar 28 12:15:35.434: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 28 12:15:35.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6488" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":283,"completed":61,"skipped":933,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 28 12:15:35.638: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 28 12:15:35.804: INFO: Waiting up to 5m0s for pod "downward-api-93b66c85-ad41-4712-b055-28789bcbaae2" in namespace "downward-api-1397" to be "Succeeded or Failed"
Mar 28 12:15:35.834: INFO: Pod "downward-api-93b66c85-ad41-4712-b055-28789bcbaae2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.476255ms
Mar 28 12:15:37.865: INFO: Pod "downward-api-93b66c85-ad41-4712-b055-28789bcbaae2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061181211s
STEP: Saw pod success
Mar 28 12:15:37.865: INFO: Pod "downward-api-93b66c85-ad41-4712-b055-28789bcbaae2" satisfied condition "Succeeded or Failed"
Mar 28 12:15:37.896: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod downward-api-93b66c85-ad41-4712-b055-28789bcbaae2 container dapi-container: <nil>
STEP: delete the pod
Mar 28 12:15:37.990: INFO: Waiting for pod downward-api-93b66c85-ad41-4712-b055-28789bcbaae2 to disappear
Mar 28 12:15:38.021: INFO: Pod downward-api-93b66c85-ad41-4712-b055-28789bcbaae2 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 28 12:15:38.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1397" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":283,"completed":62,"skipped":935,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-e61ade46-cecb-45d7-b66c-9bcc5be42253
STEP: Creating a pod to test consume secrets
Mar 28 12:15:38.320: INFO: Waiting up to 5m0s for pod "pod-secrets-636c80d6-f2b5-4d2c-854f-0d582441a524" in namespace "secrets-7694" to be "Succeeded or Failed"
Mar 28 12:15:38.350: INFO: Pod "pod-secrets-636c80d6-f2b5-4d2c-854f-0d582441a524": Phase="Pending", Reason="", readiness=false. Elapsed: 29.905553ms
Mar 28 12:15:40.381: INFO: Pod "pod-secrets-636c80d6-f2b5-4d2c-854f-0d582441a524": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061186481s
STEP: Saw pod success
Mar 28 12:15:40.381: INFO: Pod "pod-secrets-636c80d6-f2b5-4d2c-854f-0d582441a524" satisfied condition "Succeeded or Failed"
Mar 28 12:15:40.412: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-secrets-636c80d6-f2b5-4d2c-854f-0d582441a524 container secret-volume-test: <nil>
STEP: delete the pod
Mar 28 12:15:40.492: INFO: Waiting for pod pod-secrets-636c80d6-f2b5-4d2c-854f-0d582441a524 to disappear
Mar 28 12:15:40.522: INFO: Pod pod-secrets-636c80d6-f2b5-4d2c-854f-0d582441a524 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 28 12:15:40.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7694" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":283,"completed":63,"skipped":953,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-5a240511-3d0e-4a6e-a712-471d91846c14
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 12:17:00.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8595" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":64,"skipped":954,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Mar 28 12:17:07.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3197" for this suite.
STEP: Destroying namespace "webhook-3197-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":283,"completed":65,"skipped":958,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Mar 28 12:17:10.151: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 28 12:17:10.410: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 12:17:10.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6503" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":283,"completed":66,"skipped":989,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 12:17:10.675: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a22c1e8-b9f4-4499-85b1-9600ddde053c" in namespace "downward-api-4408" to be "Succeeded or Failed"
Mar 28 12:17:10.707: INFO: Pod "downwardapi-volume-4a22c1e8-b9f4-4499-85b1-9600ddde053c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.637752ms
Mar 28 12:17:12.738: INFO: Pod "downwardapi-volume-4a22c1e8-b9f4-4499-85b1-9600ddde053c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063437351s
STEP: Saw pod success
Mar 28 12:17:12.738: INFO: Pod "downwardapi-volume-4a22c1e8-b9f4-4499-85b1-9600ddde053c" satisfied condition "Succeeded or Failed"
Mar 28 12:17:12.771: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-4a22c1e8-b9f4-4499-85b1-9600ddde053c container client-container: <nil>
STEP: delete the pod
Mar 28 12:17:12.856: INFO: Waiting for pod downwardapi-volume-4a22c1e8-b9f4-4499-85b1-9600ddde053c to disappear
Mar 28 12:17:12.888: INFO: Pod downwardapi-volume-4a22c1e8-b9f4-4499-85b1-9600ddde053c no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 28 12:17:12.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4408" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":67,"skipped":992,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 22 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 28 12:17:20.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8285" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":283,"completed":68,"skipped":1014,"failed":0}
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 28 12:17:20.610: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 28 12:17:23.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9861" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":283,"completed":69,"skipped":1018,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Mar 28 12:17:34.470: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:34.506: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:34.604: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:34.637: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:34.668: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:34.700: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:34.765: INFO: Lookups using dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2530.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2530.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local jessie_udp@dns-test-service-2.dns-2530.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2530.svc.cluster.local]

Mar 28 12:17:39.798: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:39.831: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:39.862: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:39.894: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:39.989: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:40.021: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:40.055: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:40.087: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:40.152: INFO: Lookups using dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2530.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2530.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local jessie_udp@dns-test-service-2.dns-2530.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2530.svc.cluster.local]

Mar 28 12:17:44.798: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:44.831: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:44.863: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:44.896: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:44.997: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:45.030: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:45.062: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:45.095: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:45.161: INFO: Lookups using dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2530.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2530.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local jessie_udp@dns-test-service-2.dns-2530.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2530.svc.cluster.local]

Mar 28 12:17:49.798: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:49.831: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:49.864: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:49.897: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:49.998: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:50.030: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:50.063: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:50.097: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:50.165: INFO: Lookups using dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2530.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2530.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local jessie_udp@dns-test-service-2.dns-2530.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2530.svc.cluster.local]

Mar 28 12:17:54.798: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:54.830: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:54.861: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:54.894: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:54.991: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:55.024: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:55.057: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:55.089: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2530.svc.cluster.local from pod dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e: the server could not find the requested resource (get pods dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e)
Mar 28 12:17:55.154: INFO: Lookups using dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2530.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2530.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2530.svc.cluster.local jessie_udp@dns-test-service-2.dns-2530.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2530.svc.cluster.local]

Mar 28 12:18:00.158: INFO: DNS probes using dns-2530/dns-test-cecff7e4-4356-483d-86f8-2d6772c6a72e succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 28 12:18:00.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2530" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":283,"completed":70,"skipped":1056,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-9ebd53e3-7941-44ff-ac9d-d280d6a9870d
STEP: Creating a pod to test consume secrets
Mar 28 12:18:00.543: INFO: Waiting up to 5m0s for pod "pod-secrets-f1be068c-ec37-40d3-a848-6048b309eabf" in namespace "secrets-5538" to be "Succeeded or Failed"
Mar 28 12:18:00.578: INFO: Pod "pod-secrets-f1be068c-ec37-40d3-a848-6048b309eabf": Phase="Pending", Reason="", readiness=false. Elapsed: 34.845023ms
Mar 28 12:18:02.609: INFO: Pod "pod-secrets-f1be068c-ec37-40d3-a848-6048b309eabf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065659853s
STEP: Saw pod success
Mar 28 12:18:02.609: INFO: Pod "pod-secrets-f1be068c-ec37-40d3-a848-6048b309eabf" satisfied condition "Succeeded or Failed"
Mar 28 12:18:02.640: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-secrets-f1be068c-ec37-40d3-a848-6048b309eabf container secret-volume-test: <nil>
STEP: delete the pod
Mar 28 12:18:02.722: INFO: Waiting for pod pod-secrets-f1be068c-ec37-40d3-a848-6048b309eabf to disappear
Mar 28 12:18:02.757: INFO: Pod pod-secrets-f1be068c-ec37-40d3-a848-6048b309eabf no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 28 12:18:02.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5538" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":71,"skipped":1064,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...

W0328 12:18:03.846037   24783 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 28 12:18:03.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5019" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":283,"completed":72,"skipped":1110,"failed":0}

------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-73c59d9c-9c47-493d-9bb7-b5a74495bf01
STEP: Creating a pod to test consume secrets
Mar 28 12:18:04.113: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-74acce77-6370-488d-8086-d9620c004fc6" in namespace "projected-2899" to be "Succeeded or Failed"
Mar 28 12:18:04.143: INFO: Pod "pod-projected-secrets-74acce77-6370-488d-8086-d9620c004fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.546341ms
Mar 28 12:18:06.174: INFO: Pod "pod-projected-secrets-74acce77-6370-488d-8086-d9620c004fc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061309615s
STEP: Saw pod success
Mar 28 12:18:06.174: INFO: Pod "pod-projected-secrets-74acce77-6370-488d-8086-d9620c004fc6" satisfied condition "Succeeded or Failed"
Mar 28 12:18:06.206: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-projected-secrets-74acce77-6370-488d-8086-d9620c004fc6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 28 12:18:06.286: INFO: Waiting for pod pod-projected-secrets-74acce77-6370-488d-8086-d9620c004fc6 to disappear
Mar 28 12:18:06.318: INFO: Pod pod-projected-secrets-74acce77-6370-488d-8086-d9620c004fc6 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 28 12:18:06.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2899" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":73,"skipped":1110,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
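A projected secret differs from a plain secret volume in that the secret is one of several possible sources bundled under a single mount. A sketch of just the volume definition, with an illustrative secret name; printing it as JSON keeps the example self-contained:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected volume wraps one or more sources (secret, configMap,
	// downwardAPI, serviceAccountToken) under one mount point.
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"}, // illustrative
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}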
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Mar 28 12:18:06.833: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7681 /api/v1/namespaces/watch-7681/configmaps/e2e-watch-test-watch-closed e8e7f279-d0a8-443f-95db-894efe1b37d7 8160 0 2020-03-28 12:18:06 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 28 12:18:06.833: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7681 /api/v1/namespaces/watch-7681/configmaps/e2e-watch-test-watch-closed e8e7f279-d0a8-443f-95db-894efe1b37d7 8161 0 2020-03-28 12:18:06 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 28 12:18:06.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7681" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":283,"completed":74,"skipped":1131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
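The restart-from-last-resourceVersion pattern this test verifies looks roughly like the following with client-go (namespace illustrative; assumes a recent client-go). The key point is passing the last observed resourceVersion into the second watch's ListOptions:

package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// First watch: remember the resourceVersion of the last event seen.
	w1, err := cs.CoreV1().ConfigMaps("default").Watch(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	var lastRV string
	ev := <-w1.ResultChan() // blocks until an event arrives
	if m, err := meta.Accessor(ev.Object); err == nil {
		lastRV = m.GetResourceVersion()
	}
	w1.Stop()

	// Second watch: resume from that resourceVersion so nothing that happened
	// in between is missed (within the apiserver's watch-cache window).
	w2, err := cs.CoreV1().ConfigMaps("default").Watch(ctx, metav1.ListOptions{ResourceVersion: lastRV})
	if err != nil {
		panic(err)
	}
	for ev := range w2.ResultChan() {
		fmt.Println(ev.Type)
	}
}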
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 162 lines ...
Mar 28 12:18:09.476: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Mar 28 12:18:09.476: INFO: Waiting for all frontend pods to be Running.
Mar 28 12:18:14.527: INFO: Waiting for frontend to serve content.
Mar 28 12:18:14.572: INFO: Trying to add a new entry to the guestbook.
Mar 28 12:18:14.609: INFO: Verifying that added entry can be retrieved.
Mar 28 12:18:14.645: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Mar 28 12:18:19.686: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig delete --grace-period=0 --force -f - --namespace=kubectl-5633'
Mar 28 12:18:19.950: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 28 12:18:19.950: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 28 12:18:19.950: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig delete --grace-period=0 --force -f - --namespace=kubectl-5633'
... skipping 16 lines ...
Mar 28 12:18:21.201: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 28 12:18:21.201: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:18:21.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5633" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":283,"completed":75,"skipped":1159,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
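The repeated warning above comes from the force-delete path. Server-side it amounts to a delete with a zero grace period, roughly as follows (pod and namespace names illustrative; assumes a recent client-go):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Grace period 0 is what `kubectl delete --grace-period=0 --force` sends;
	// as the warning above notes, the API object goes away immediately while
	// the process may keep running on the node until the kubelet catches up.
	gp := int64(0)
	err = cs.CoreV1().Pods("kubectl-5633").Delete(context.TODO(), "agnhost-slave-xyz", // illustrative pod name
		metav1.DeleteOptions{GracePeriodSeconds: &gp})
	if err != nil {
		panic(err)
	}
}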
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 28 12:18:26.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5974" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":283,"completed":76,"skipped":1181,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 28 12:18:37.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1008" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":283,"completed":77,"skipped":1221,"failed":0}
SSSSSSSSSSS
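For context on what "DNS for pods for Hostname" resolves: a pod with hostname and subdomain set, plus a headless Service named after the subdomain, gets a <hostname>.<subdomain>.<namespace>.svc record. A sketch of the relevant pod-spec fields, with illustrative names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The subdomain must match the name of a headless Service selecting this
	// pod for the per-pod DNS record to exist.
	spec := corev1.PodSpec{
		Hostname:  "dns-querier-2",      // illustrative
		Subdomain: "dns-test-service-2", // illustrative
		Containers: []corev1.Container{{
			Name: "webserver", Image: "busybox", // illustrative
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}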
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Mar 28 12:18:44.318: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-1763-crds.spec'
Mar 28 12:18:44.593: INFO: stderr: ""
Mar 28 12:18:44.593: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1763-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Mar 28 12:18:44.593: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-1763-crds.spec.bars'
Mar 28 12:18:44.865: INFO: stderr: ""
Mar 28 12:18:44.865: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1763-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar 28 12:18:44.865: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-1763-crds.spec.bars2'
Mar 28 12:18:45.253: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:18:48.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8009" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":283,"completed":78,"skipped":1232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
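The "rc: 1" above is the assertion that kubectl explain fails for a property missing from the CRD's structural schema. A sketch of the same check from Go, assuming kubectl is on PATH and the CRD from this test is installed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `kubectl explain` serves field docs from the OpenAPI schema the
	// apiserver publishes for the CRD; a property absent from the structural
	// schema (spec.bars2 above) makes kubectl exit non-zero.
	cmd := exec.Command("kubectl", "explain",
		"e2e-test-crd-publish-openapi-1763-crds.spec.bars2")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Printf("rc: %d\n", ee.ExitCode()) // matches the "rc: 1" recorded above
	} else if err != nil {
		panic(err)
	}
}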
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-80e2720d-33b7-4a32-9500-03658e301cdd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 28 12:20:05.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8135" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":79,"skipped":1263,"failed":0}
S
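The update step in this test is an ordinary ConfigMap update; the kubelet's sync loop then rewrites the mounted files, which is why the test only has to wait and watch the volume. Roughly, with illustrative names and assuming a recent client-go:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	cm, err := cs.CoreV1().ConfigMaps("default").Get(ctx, "projected-configmap-test", metav1.GetOptions{}) // illustrative name
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // mutate a key; the kubelet rewrites the mounted file on sync
	if _, err := cs.CoreV1().ConfigMaps("default").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}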
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 28 12:20:08.384: INFO: Initial restart count of pod test-webserver-f37af5c6-75ce-4c80-80e1-864872fbb62b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 28 12:24:10.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7313" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":80,"skipped":1264,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
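The probe under test is an HTTP liveness probe that stays green, so restartCount must remain 0 for the whole observation window. A sketch of such a container (image and thresholds illustrative; note the Handler embedding shown here was renamed ProbeHandler in newer corev1 releases):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// The kubelet restarts the container only after failureThreshold
	// consecutive probe failures; a consistently healthy endpoint keeps
	// restartCount at 0, which is what the test above asserts.
	c := corev1.Container{
		Name:  "test-webserver",
		Image: "busybox", // illustrative
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
			},
			InitialDelaySeconds: 15,
			TimeoutSeconds:      5,
			PeriodSeconds:       10,
			FailureThreshold:    3,
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}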
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 28 12:24:12.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4707" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":283,"completed":81,"skipped":1306,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 28 12:24:12.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3107" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":283,"completed":82,"skipped":1309,"failed":0}
SSSSSSSSSSSSSS
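Finding a service without knowing its namespace is a list against the empty namespace, i.e. metav1.NamespaceAll. A minimal sketch, assuming a recent client-go:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// metav1.NamespaceAll is the empty string; listing with it spans all
	// namespaces, so a service can be found without knowing where it lives.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s/%s\n", s.Namespace, s.Name)
	}
}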
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-p6qw
STEP: Creating a pod to test atomic-volume-subpath
Mar 28 12:24:13.248: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-p6qw" in namespace "subpath-8086" to be "Succeeded or Failed"
Mar 28 12:24:13.280: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 32.229156ms
Mar 28 12:24:15.311: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Running", Reason="", readiness=true. Elapsed: 2.062874191s
Mar 28 12:24:17.342: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Running", Reason="", readiness=true. Elapsed: 4.0944971s
Mar 28 12:24:19.374: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Running", Reason="", readiness=true. Elapsed: 6.125823364s
Mar 28 12:24:21.404: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Running", Reason="", readiness=true. Elapsed: 8.156782362s
Mar 28 12:24:23.436: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Running", Reason="", readiness=true. Elapsed: 10.187900197s
Mar 28 12:24:25.467: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Running", Reason="", readiness=true. Elapsed: 12.218865762s
Mar 28 12:24:27.497: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Running", Reason="", readiness=true. Elapsed: 14.249717527s
Mar 28 12:24:29.529: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Running", Reason="", readiness=true. Elapsed: 16.280878258s
Mar 28 12:24:31.560: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Running", Reason="", readiness=true. Elapsed: 18.311863176s
Mar 28 12:24:33.590: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Running", Reason="", readiness=true. Elapsed: 20.342341348s
Mar 28 12:24:35.621: INFO: Pod "pod-subpath-test-projected-p6qw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.373500567s
STEP: Saw pod success
Mar 28 12:24:35.621: INFO: Pod "pod-subpath-test-projected-p6qw" satisfied condition "Succeeded or Failed"
Mar 28 12:24:35.652: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-subpath-test-projected-p6qw container test-container-subpath-projected-p6qw: <nil>
STEP: delete the pod
Mar 28 12:24:35.747: INFO: Waiting for pod pod-subpath-test-projected-p6qw to disappear
Mar 28 12:24:35.779: INFO: Pod pod-subpath-test-projected-p6qw no longer exists
STEP: Deleting pod pod-subpath-test-projected-p6qw
Mar 28 12:24:35.779: INFO: Deleting pod "pod-subpath-test-projected-p6qw" in namespace "subpath-8086"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 28 12:24:35.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8086" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":283,"completed":83,"skipped":1323,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
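subPath mounts a single entry from inside a volume, and this test checks it stays readable while the atomic writer behind projected volumes swaps its data symlinks underneath. A sketch of just the mount (names illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The container sees only /test-volume, backed by the "projected-p6qw"
	// entry inside the referenced volume rather than the volume root.
	m := corev1.VolumeMount{
		Name:      "projected-volume", // illustrative volume name
		MountPath: "/test-volume",
		SubPath:   "projected-p6qw", // illustrative subpath
	}
	out, _ := json.MarshalIndent(m, "", "  ")
	fmt.Println(string(out))
}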
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Mar 28 12:24:38.818: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 28 12:24:38.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8941" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":283,"completed":84,"skipped":1363,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 75 lines ...
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-s7mtx webserver-deployment-595b5b9587- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-595b5b9587-s7mtx ea96387f-b132-45c6-b46c-315b1bf5f226 9836 0 2020-03-28 12:24:46 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9b9114e7-9978-4e6c-b06c-c28754e0fb49 0xc003405690 0xc003405691}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 28 12:24:47.019: INFO: Pod "webserver-deployment-595b5b9587-xl2t8" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xl2t8 webserver-deployment-595b5b9587- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-595b5b9587-xl2t8 fcb04502-6243-4491-9cd1-3be032185cd1 9622 0 2020-03-28 12:24:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:192.168.154.188/32] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9b9114e7-9978-4e6c-b06c-c28754e0fb49 0xc0034057a0 0xc0034057a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-28 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.154.188,StartTime:2020-03-28 12:24:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-28 12:24:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1c37dae60844c8b44db74a43566e4c31140657f7a62593bee1c1c4782290a074,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.154.188,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 28 12:24:47.019: INFO: Pod "webserver-deployment-595b5b9587-zcpk8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zcpk8 webserver-deployment-595b5b9587- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-595b5b9587-zcpk8 9ed06651-3463-4978-9484-4bb5f210d75d 9853 0 2020-03-28 12:24:46 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9b9114e7-9978-4e6c-b06c-c28754e0fb49 0xc003405900 0xc003405901}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-28 12:24:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 28 12:24:47.019: INFO: Pod "webserver-deployment-c7997dcc8-24kgh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-24kgh webserver-deployment-c7997dcc8- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-c7997dcc8-24kgh 0121d4f9-a3dc-4b6e-bc44-67dc9e91da8d 9769 0 2020-03-28 12:24:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.154.189/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c85dab73-7fb5-420d-8b31-e843363c5773 0xc003405a40 0xc003405a41}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.154.189,StartTime:2020-03-28 12:24:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.154.189,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 28 12:24:47.020: INFO: Pod "webserver-deployment-c7997dcc8-5mstr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5mstr webserver-deployment-c7997dcc8- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-c7997dcc8-5mstr 25661f6a-bddb-49be-aaec-7b74feaf71b1 9786 0 2020-03-28 12:24:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.154.191/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c85dab73-7fb5-420d-8b31-e843363c5773 0xc003405bf0 0xc003405bf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.154.191,StartTime:2020-03-28 12:24:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.154.191,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 28 12:24:47.020: INFO: Pod "webserver-deployment-c7997dcc8-8fdwc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8fdwc webserver-deployment-c7997dcc8- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-c7997dcc8-8fdwc 53e02bb9-38d1-48a8-ae82-b34a5fef7c38 9822 0 2020-03-28 12:24:46 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c85dab73-7fb5-420d-8b31-e843363c5773 0xc003405d80 0xc003405d81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 28 12:24:47.020: INFO: Pod "webserver-deployment-c7997dcc8-9rhs4" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9rhs4 webserver-deployment-c7997dcc8- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-c7997dcc8-9rhs4 88516903-b12d-4253-b2a8-02fb4504dfca 9821 0 2020-03-28 12:24:46 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c85dab73-7fb5-420d-8b31-e843363c5773 0xc003405e90 0xc003405e91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 28 12:24:47.020: INFO: Pod "webserver-deployment-c7997dcc8-brkgp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-brkgp webserver-deployment-c7997dcc8- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-c7997dcc8-brkgp c18fe621-8fdf-43e8-931f-7e2f94b9f686 9774 0 2020-03-28 12:24:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.154.190/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c85dab73-7fb5-420d-8b31-e843363c5773 0xc003405fa0 0xc003405fa1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.154.190,StartTime:2020-03-28 12:24:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.154.190,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 28 12:24:47.020: INFO: Pod "webserver-deployment-c7997dcc8-ckz5n" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ckz5n webserver-deployment-c7997dcc8- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-c7997dcc8-ckz5n b3325108-be47-4c91-96b7-65dedbafec82 9866 0 2020-03-28 12:24:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.172.136/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c85dab73-7fb5-420d-8b31-e843363c5773 0xc0033b2130 0xc0033b2131}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.172.136,StartTime:2020-03-28 12:24:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.172.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 28 12:24:47.020: INFO: Pod "webserver-deployment-c7997dcc8-dm5gp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dm5gp webserver-deployment-c7997dcc8- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-c7997dcc8-dm5gp 3f152540-aec6-4132-b53b-4a64a36c9a48 9820 0 2020-03-28 12:24:46 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c85dab73-7fb5-420d-8b31-e843363c5773 0xc0033b22d0 0xc0033b22d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 28 12:24:47.021: INFO: Pod "webserver-deployment-c7997dcc8-lf2t9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lf2t9 webserver-deployment-c7997dcc8- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-c7997dcc8-lf2t9 9282d09f-16df-4897-9ffd-f83cdb5a5e2b 9808 0 2020-03-28 12:24:46 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c85dab73-7fb5-420d-8b31-e843363c5773 0xc0033b23e0 0xc0033b23e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 8 lines ...
Mar 28 12:24:47.021: INFO: Pod "webserver-deployment-c7997dcc8-qs2kj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qs2kj webserver-deployment-c7997dcc8- deployment-4831 /api/v1/namespaces/deployment-4831/pods/webserver-deployment-c7997dcc8-qs2kj d676e439-e52c-47e7-85ef-5d424145c9fe 9823 0 2020-03-28 12:24:46 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c85dab73-7fb5-420d-8b31-e843363c5773 0xc0033b2a00 0xc0033b2a01}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqw5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqw5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqw5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:24:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 28 12:24:47.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4831" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":283,"completed":85,"skipped":1386,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 28 12:24:47.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3549" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":283,"completed":86,"skipped":1407,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-fw42
STEP: Creating a pod to test atomic-volume-subpath
Mar 28 12:24:47.603: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fw42" in namespace "subpath-9615" to be "Succeeded or Failed"
Mar 28 12:24:47.634: INFO: Pod "pod-subpath-test-configmap-fw42": Phase="Pending", Reason="", readiness=false. Elapsed: 31.117225ms
Mar 28 12:24:49.672: INFO: Pod "pod-subpath-test-configmap-fw42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068348769s
Mar 28 12:24:51.702: INFO: Pod "pod-subpath-test-configmap-fw42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098965666s
Mar 28 12:24:53.741: INFO: Pod "pod-subpath-test-configmap-fw42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138173881s
Mar 28 12:24:55.772: INFO: Pod "pod-subpath-test-configmap-fw42": Phase="Running", Reason="", readiness=true. Elapsed: 8.169010361s
Mar 28 12:24:57.803: INFO: Pod "pod-subpath-test-configmap-fw42": Phase="Running", Reason="", readiness=true. Elapsed: 10.199810675s
... skipping 3 lines ...
Mar 28 12:25:05.926: INFO: Pod "pod-subpath-test-configmap-fw42": Phase="Running", Reason="", readiness=true. Elapsed: 18.322810799s
Mar 28 12:25:07.957: INFO: Pod "pod-subpath-test-configmap-fw42": Phase="Running", Reason="", readiness=true. Elapsed: 20.353875611s
Mar 28 12:25:09.989: INFO: Pod "pod-subpath-test-configmap-fw42": Phase="Running", Reason="", readiness=true. Elapsed: 22.385606406s
Mar 28 12:25:12.020: INFO: Pod "pod-subpath-test-configmap-fw42": Phase="Running", Reason="", readiness=true. Elapsed: 24.416789473s
Mar 28 12:25:14.051: INFO: Pod "pod-subpath-test-configmap-fw42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.447228787s
STEP: Saw pod success
Mar 28 12:25:14.051: INFO: Pod "pod-subpath-test-configmap-fw42" satisfied condition "Succeeded or Failed"
Mar 28 12:25:14.082: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-subpath-test-configmap-fw42 container test-container-subpath-configmap-fw42: <nil>
STEP: delete the pod
Mar 28 12:25:14.175: INFO: Waiting for pod pod-subpath-test-configmap-fw42 to disappear
Mar 28 12:25:14.206: INFO: Pod pod-subpath-test-configmap-fw42 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-fw42
Mar 28 12:25:14.206: INFO: Deleting pod "pod-subpath-test-configmap-fw42" in namespace "subpath-9615"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 28 12:25:14.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9615" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":283,"completed":87,"skipped":1409,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Mar 28 12:25:36.931: INFO: Restart count of pod container-probe-56/liveness-a7595271-a529-42cf-9097-6b3ca01bd982 is now 1 (20.3396039s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 28 12:25:36.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-56" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":88,"skipped":1420,"failed":0}
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 28 12:25:37.065: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-60003eec-f7b2-44b5-b9fc-6d096d01d78d
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 12:25:37.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4004" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":283,"completed":89,"skipped":1426,"failed":0}
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-342/configmap-test-0ea0df8c-9740-4b25-8459-768576c63da6
STEP: Creating a pod to test consume configMaps
Mar 28 12:25:37.490: INFO: Waiting up to 5m0s for pod "pod-configmaps-ca2c33a7-429d-41db-88a7-6625cacedd56" in namespace "configmap-342" to be "Succeeded or Failed"
Mar 28 12:25:37.521: INFO: Pod "pod-configmaps-ca2c33a7-429d-41db-88a7-6625cacedd56": Phase="Pending", Reason="", readiness=false. Elapsed: 30.67395ms
Mar 28 12:25:39.551: INFO: Pod "pod-configmaps-ca2c33a7-429d-41db-88a7-6625cacedd56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061506913s
STEP: Saw pod success
Mar 28 12:25:39.551: INFO: Pod "pod-configmaps-ca2c33a7-429d-41db-88a7-6625cacedd56" satisfied condition "Succeeded or Failed"
Mar 28 12:25:39.582: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-configmaps-ca2c33a7-429d-41db-88a7-6625cacedd56 container env-test: <nil>
STEP: delete the pod
Mar 28 12:25:39.657: INFO: Waiting for pod pod-configmaps-ca2c33a7-429d-41db-88a7-6625cacedd56 to disappear
Mar 28 12:25:39.689: INFO: Pod pod-configmaps-ca2c33a7-429d-41db-88a7-6625cacedd56 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 12:25:39.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-342" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":283,"completed":90,"skipped":1431,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 12:25:39.783: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 28 12:25:39.954: INFO: Waiting up to 5m0s for pod "pod-92dee85e-c57b-41bb-a5a7-d6cd26d56b04" in namespace "emptydir-9131" to be "Succeeded or Failed"
Mar 28 12:25:39.985: INFO: Pod "pod-92dee85e-c57b-41bb-a5a7-d6cd26d56b04": Phase="Pending", Reason="", readiness=false. Elapsed: 30.939749ms
Mar 28 12:25:42.016: INFO: Pod "pod-92dee85e-c57b-41bb-a5a7-d6cd26d56b04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061867585s
STEP: Saw pod success
Mar 28 12:25:42.016: INFO: Pod "pod-92dee85e-c57b-41bb-a5a7-d6cd26d56b04" satisfied condition "Succeeded or Failed"
Mar 28 12:25:42.046: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-92dee85e-c57b-41bb-a5a7-d6cd26d56b04 container test-container: <nil>
STEP: delete the pod
Mar 28 12:25:42.131: INFO: Waiting for pod pod-92dee85e-c57b-41bb-a5a7-d6cd26d56b04 to disappear
Mar 28 12:25:42.162: INFO: Pod pod-92dee85e-c57b-41bb-a5a7-d6cd26d56b04 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 12:25:42.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9131" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":91,"skipped":1433,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 28 12:25:42.255: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Mar 28 12:25:42.423: INFO: Waiting up to 5m0s for pod "var-expansion-b958d215-7e9f-44f9-a0e6-fb03bf118a37" in namespace "var-expansion-3055" to be "Succeeded or Failed"
Mar 28 12:25:42.458: INFO: Pod "var-expansion-b958d215-7e9f-44f9-a0e6-fb03bf118a37": Phase="Pending", Reason="", readiness=false. Elapsed: 34.788939ms
Mar 28 12:25:44.488: INFO: Pod "var-expansion-b958d215-7e9f-44f9-a0e6-fb03bf118a37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065451021s
STEP: Saw pod success
Mar 28 12:25:44.488: INFO: Pod "var-expansion-b958d215-7e9f-44f9-a0e6-fb03bf118a37" satisfied condition "Succeeded or Failed"
Mar 28 12:25:44.519: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod var-expansion-b958d215-7e9f-44f9-a0e6-fb03bf118a37 container dapi-container: <nil>
STEP: delete the pod
Mar 28 12:25:44.601: INFO: Waiting for pod var-expansion-b958d215-7e9f-44f9-a0e6-fb03bf118a37 to disappear
Mar 28 12:25:44.631: INFO: Pod var-expansion-b958d215-7e9f-44f9-a0e6-fb03bf118a37 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 28 12:25:44.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3055" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":283,"completed":92,"skipped":1437,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  test/e2e/framework/framework.go:175
Mar 28 12:25:48.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4845" for this suite.
STEP: Destroying namespace "webhook-4845-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":283,"completed":93,"skipped":1442,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Mar 28 12:26:19.772: INFO: Waiting for statefulset status.replicas updated to 0
Mar 28 12:26:19.804: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 28 12:26:19.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1751" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":283,"completed":94,"skipped":1462,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 20 lines ...
Mar 28 12:26:24.557: INFO: Pod "test-cleanup-deployment-577c77b589-698zc" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-698zc test-cleanup-deployment-577c77b589- deployment-352 /api/v1/namespaces/deployment-352/pods/test-cleanup-deployment-577c77b589-698zc 8de9b4eb-42a0-4a35-9d23-88353b7842a9 10827 0 2020-03-28 12:26:22 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:577c77b589] map[cni.projectcalico.org/podIP:192.168.172.154/32] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 0a37e484-eb9a-441b-b4a2-2311321d601d 0xc001b95cb7 0xc001b95cb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xkllr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xkllr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xkllr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:26:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:26:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:26:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:26:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.172.154,StartTime:2020-03-28 12:26:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-28 12:26:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://9067c198e4aba5a87f6c2e94bc8c3f1b07ab5dfac3e54aa539331ce1b606f7bf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.172.154,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 28 12:26:24.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-352" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":283,"completed":95,"skipped":1476,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 28 12:26:26.900: INFO: Trying to dial the pod
Mar 28 12:26:31.996: INFO: Controller my-hostname-basic-388390c7-8dff-4fa5-b477-8b8c1fe59ffb: Got expected result from replica 1 [my-hostname-basic-388390c7-8dff-4fa5-b477-8b8c1fe59ffb-6rfhb]: "my-hostname-basic-388390c7-8dff-4fa5-b477-8b8c1fe59ffb-6rfhb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 28 12:26:31.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9029" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":96,"skipped":1489,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
Mar 28 12:26:35.504: INFO: Pod "adopt-release-dmbds": Phase="Running", Reason="", readiness=true. Elapsed: 32.970133ms
Mar 28 12:26:35.504: INFO: Pod "adopt-release-dmbds" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 28 12:26:35.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3252" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":283,"completed":97,"skipped":1500,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Mar 28 12:26:42.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7061" for this suite.
STEP: Destroying namespace "webhook-7061-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":283,"completed":98,"skipped":1516,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-123daf19-0481-4041-8872-deaa4a83f26c
STEP: Creating a pod to test consume secrets
Mar 28 12:26:43.172: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4c38e5f6-e889-4335-ada7-f5cf07962b2f" in namespace "projected-4464" to be "Succeeded or Failed"
Mar 28 12:26:43.204: INFO: Pod "pod-projected-secrets-4c38e5f6-e889-4335-ada7-f5cf07962b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.575505ms
Mar 28 12:26:45.235: INFO: Pod "pod-projected-secrets-4c38e5f6-e889-4335-ada7-f5cf07962b2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063161498s
STEP: Saw pod success
Mar 28 12:26:45.236: INFO: Pod "pod-projected-secrets-4c38e5f6-e889-4335-ada7-f5cf07962b2f" satisfied condition "Succeeded or Failed"
Mar 28 12:26:45.266: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-projected-secrets-4c38e5f6-e889-4335-ada7-f5cf07962b2f container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 28 12:26:45.349: INFO: Waiting for pod pod-projected-secrets-4c38e5f6-e889-4335-ada7-f5cf07962b2f to disappear
Mar 28 12:26:45.379: INFO: Pod pod-projected-secrets-4c38e5f6-e889-4335-ada7-f5cf07962b2f no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 28 12:26:45.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4464" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":99,"skipped":1554,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 28 12:26:56.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6530" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":283,"completed":100,"skipped":1579,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 28 12:27:13.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4946" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":283,"completed":101,"skipped":1582,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-337bf034-13c3-4723-813f-50903b16175c
STEP: Creating a pod to test consume configMaps
Mar 28 12:27:13.866: INFO: Waiting up to 5m0s for pod "pod-configmaps-ffb102d3-37ac-46d9-bd07-80bf09e48c42" in namespace "configmap-7718" to be "Succeeded or Failed"
Mar 28 12:27:13.897: INFO: Pod "pod-configmaps-ffb102d3-37ac-46d9-bd07-80bf09e48c42": Phase="Pending", Reason="", readiness=false. Elapsed: 31.207648ms
Mar 28 12:27:15.929: INFO: Pod "pod-configmaps-ffb102d3-37ac-46d9-bd07-80bf09e48c42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062776212s
STEP: Saw pod success
Mar 28 12:27:15.929: INFO: Pod "pod-configmaps-ffb102d3-37ac-46d9-bd07-80bf09e48c42" satisfied condition "Succeeded or Failed"
Mar 28 12:27:15.959: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-configmaps-ffb102d3-37ac-46d9-bd07-80bf09e48c42 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 12:27:16.036: INFO: Waiting for pod pod-configmaps-ffb102d3-37ac-46d9-bd07-80bf09e48c42 to disappear
Mar 28 12:27:16.067: INFO: Pod pod-configmaps-ffb102d3-37ac-46d9-bd07-80bf09e48c42 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 12:27:16.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7718" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":102,"skipped":1586,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0328 12:27:56.531643   24783 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 28 12:27:56.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7958" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":283,"completed":103,"skipped":1622,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Mar 28 12:27:57.768: INFO: created pod pod-service-account-nomountsa-nomountspec
Mar 28 12:27:57.768: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 28 12:27:57.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5485" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":283,"completed":104,"skipped":1652,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 9 lines ...
Mar 28 12:27:58.061: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 28 12:27:58.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3140" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":283,"completed":105,"skipped":1663,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 12:27:58.422: INFO: Waiting up to 5m0s for pod "downwardapi-volume-487dbfe6-4697-4cf9-924f-85825a3b822d" in namespace "projected-9849" to be "Succeeded or Failed"
Mar 28 12:27:58.452: INFO: Pod "downwardapi-volume-487dbfe6-4697-4cf9-924f-85825a3b822d": Phase="Pending", Reason="", readiness=false. Elapsed: 29.894526ms
Mar 28 12:28:00.483: INFO: Pod "downwardapi-volume-487dbfe6-4697-4cf9-924f-85825a3b822d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060862774s
Mar 28 12:28:02.514: INFO: Pod "downwardapi-volume-487dbfe6-4697-4cf9-924f-85825a3b822d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091898313s
STEP: Saw pod success
Mar 28 12:28:02.514: INFO: Pod "downwardapi-volume-487dbfe6-4697-4cf9-924f-85825a3b822d" satisfied condition "Succeeded or Failed"
Mar 28 12:28:02.545: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-487dbfe6-4697-4cf9-924f-85825a3b822d container client-container: <nil>
STEP: delete the pod
Mar 28 12:28:02.642: INFO: Waiting for pod downwardapi-volume-487dbfe6-4697-4cf9-924f-85825a3b822d to disappear
Mar 28 12:28:02.673: INFO: Pod downwardapi-volume-487dbfe6-4697-4cf9-924f-85825a3b822d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 28 12:28:02.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9849" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":106,"skipped":1677,"failed":0}

------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
Mar 28 12:28:10.432: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:175
Mar 28 12:28:10.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-388" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":283,"completed":107,"skipped":1677,"failed":0}
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Mar 28 12:28:14.317: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-1723 pod-service-account-446623b9-8e64-4d83-a9dc-fe300450e091 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 28 12:28:14.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1723" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":283,"completed":108,"skipped":1683,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-8d57e8e4-6ad0-43c5-8833-776c094793ec
STEP: Creating a pod to test consume secrets
Mar 28 12:28:15.082: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4dc70f3e-60e3-4779-9985-a60178752580" in namespace "projected-2386" to be "Succeeded or Failed"
Mar 28 12:28:15.112: INFO: Pod "pod-projected-secrets-4dc70f3e-60e3-4779-9985-a60178752580": Phase="Pending", Reason="", readiness=false. Elapsed: 29.699643ms
Mar 28 12:28:17.144: INFO: Pod "pod-projected-secrets-4dc70f3e-60e3-4779-9985-a60178752580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061406735s
STEP: Saw pod success
Mar 28 12:28:17.144: INFO: Pod "pod-projected-secrets-4dc70f3e-60e3-4779-9985-a60178752580" satisfied condition "Succeeded or Failed"
Mar 28 12:28:17.174: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-projected-secrets-4dc70f3e-60e3-4779-9985-a60178752580 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 28 12:28:17.253: INFO: Waiting for pod pod-projected-secrets-4dc70f3e-60e3-4779-9985-a60178752580 to disappear
Mar 28 12:28:17.284: INFO: Pod pod-projected-secrets-4dc70f3e-60e3-4779-9985-a60178752580 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 28 12:28:17.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2386" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":109,"skipped":1688,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 28 12:28:30.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6257" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":283,"completed":110,"skipped":1694,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 28 12:28:37.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0328 12:28:37.357173   24783 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-929" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":283,"completed":111,"skipped":1721,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 28 12:28:53.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1729" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":283,"completed":112,"skipped":1763,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Mar 28 12:29:02.421: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 28 12:29:02.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-515" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":283,"completed":113,"skipped":1764,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 20 lines ...
Mar 28 12:29:04.748: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 28 12:29:04.748: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe pod agnhost-master-q5gtf --namespace=kubectl-2985'
Mar 28 12:29:05.013: INFO: stderr: ""
Mar 28 12:29:05.013: INFO: stdout: "Name:         agnhost-master-q5gtf\nNamespace:    kubectl-2985\nPriority:     0\nNode:         test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal/10.150.0.4\nStart Time:   Sat, 28 Mar 2020 12:29:03 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  cni.projectcalico.org/podIP: 192.168.154.166/32\nStatus:       Running\nIP:           192.168.154.166\nIPs:\n  IP:           192.168.154.166\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://e05e1916f42fa859de50ab18ff08003aeca86eb4aa2aacbb27d9d68bfee4669b\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 28 Mar 2020 12:29:04 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-txhfj (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-txhfj:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-txhfj\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                                                            Message\n  ----    ------     ----       ----                                                            -------\n  Normal  Scheduled  <unknown>  default-scheduler                                               Successfully assigned kubectl-2985/agnhost-master-q5gtf to test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal\n  Normal  Pulled     2s         kubelet, test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    1s         kubelet, test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal  Created container agnhost-master\n  Normal  Started    1s         kubelet, test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal  Started container agnhost-master\n"
Mar 28 12:29:05.013: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe rc agnhost-master --namespace=kubectl-2985'
Mar 28 12:29:05.308: INFO: stderr: ""
Mar 28 12:29:05.308: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-2985\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  2s    replication-controller  Created pod: agnhost-master-q5gtf\n"
Mar 28 12:29:05.308: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe service agnhost-master --namespace=kubectl-2985'
Mar 28 12:29:05.592: INFO: stderr: ""
Mar 28 12:29:05.593: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-2985\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.97.79.49\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.154.166:6379\nSession Affinity:  None\nEvents:            <none>\n"
Mar 28 12:29:05.649: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe node test1-controlplane-0.c.k8s-boskos-gce-project-02.internal'
Mar 28 12:29:06.019: INFO: stderr: ""
Mar 28 12:29:06.020: INFO: stdout: "Name:               test1-controlplane-0.c.k8s-boskos-gce-project-02.internal\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-2\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=us-east4\n                    failure-domain.beta.kubernetes.io/zone=us-east4-a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=test1-controlplane-0.c.k8s-boskos-gce-project-02.internal\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    projectcalico.org/IPv4Address: 10.150.0.2/32\n                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.249.192\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 28 Mar 2020 11:49:23 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  test1-controlplane-0.c.k8s-boskos-gce-project-02.internal\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 28 Mar 2020 12:29:04 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 28 Mar 2020 11:50:19 +0000   Sat, 28 Mar 2020 11:50:19 +0000   CalicoIsUp                   Calico is running on this node\n  MemoryPressure       False   Sat, 28 Mar 2020 12:28:06 +0000   Sat, 28 Mar 2020 11:49:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 28 Mar 2020 12:28:06 +0000   Sat, 28 Mar 2020 11:49:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 28 Mar 2020 12:28:06 +0000   Sat, 28 Mar 2020 11:49:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 28 Mar 2020 12:28:06 +0000   Sat, 28 Mar 2020 11:49:43 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled
Addresses:
  InternalIP:   10.150.0.2
  ExternalIP:   
  InternalDNS:  test1-controlplane-0.c.k8s-boskos-gce-project-02.internal
  Hostname:     test1-controlplane-0.c.k8s-boskos-gce-project-02.internal
Capacity:
  attachable-volumes-gce-pd:  127
  cpu:                        2
  ephemeral-storage:          30308240Ki
  hugepages-1Gi:              0
  hugepages-2Mi:              0
  memory:                     7648892Ki
  pods:                       110
Allocatable:
  attachable-volumes-gce-pd:  127
  cpu:                        2
  ephemeral-storage:          27932073938
  hugepages-1Gi:              0
  hugepages-2Mi:              0
  memory:                     7546492Ki
  pods:                       110
System Info:
  Machine ID:                 ab1fbb51b48013174018f7988ee583f8
  System UUID:                ab1fbb51-b480-1317-4018-f7988ee583f8
  Boot ID:                    9ada2602-240d-42fb-9927-61c895093796
  Kernel Version:             5.0.0-1033-gcp
  OS Image:                   Ubuntu 18.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.3.3
  Kubelet Version:            v1.16.2
  Kube-Proxy Version:         v1.16.2
ProviderID:                   gce://k8s-boskos-gce-project-02/us-east4-a/test1-controlplane-0
Non-terminated Pods:          (9 in total)
  Namespace                   Name                                                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                                                                 ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-kube-controllers-564b6667d7-xq5j8                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
  kube-system                 calico-node-ws8cg                                                                    250m (12%)    0 (0%)      0 (0%)           0 (0%)         39m
  kube-system                 coredns-5644d7b6d9-6br7l                                                             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     39m
  kube-system                 coredns-5644d7b6d9-v5pvj                                                             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     39m
  kube-system                 etcd-test1-controlplane-0.c.k8s-boskos-gce-project-02.internal                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
  kube-system                 kube-apiserver-test1-controlplane-0.c.k8s-boskos-gce-project-02.internal             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39m
  kube-system                 kube-controller-manager-test1-controlplane-0.c.k8s-boskos-gce-project-02.internal    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38m
  kube-system                 kube-proxy-k2zts                                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
  kube-system                 kube-scheduler-test1-controlplane-0.c.k8s-boskos-gce-project-02.internal             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                   Requests    Limits
  --------                   --------    ------
  cpu                        1 (50%)     0 (0%)
  memory                     140Mi (1%)  340Mi (4%)
  ephemeral-storage          0 (0%)      0 (0%)
  hugepages-1Gi              0 (0%)      0 (0%)
  hugepages-2Mi              0 (0%)      0 (0%)
  attachable-volumes-gce-pd  0           0
Events:
  Type     Reason                   Age                From                                                                   Message
  ----     ------                   ----               ----                                                                   -------
  Normal   Starting                 41m                kubelet, test1-controlplane-0.c.k8s-boskos-gce-project-02.internal     Starting kubelet.
  Warning  InvalidDiskCapacity      41m                kubelet, test1-controlplane-0.c.k8s-boskos-gce-project-02.internal     invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  41m                kubelet, test1-controlplane-0.c.k8s-boskos-gce-project-02.internal     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  41m (x7 over 41m)  kubelet, test1-controlplane-0.c.k8s-boskos-gce-project-02.internal     Node test1-controlplane-0.c.k8s-boskos-gce-project-02.internal status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    41m (x8 over 41m)  kubelet, test1-controlplane-0.c.k8s-boskos-gce-project-02.internal     Node test1-controlplane-0.c.k8s-boskos-gce-project-02.internal status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     41m (x7 over 41m)  kubelet, test1-controlplane-0.c.k8s-boskos-gce-project-02.internal     Node test1-controlplane-0.c.k8s-boskos-gce-project-02.internal status is now: NodeHasSufficientPID
  Normal   Starting                 39m                kube-proxy, test1-controlplane-0.c.k8s-boskos-gce-project-02.internal  Starting kube-proxy.
Mar 28 12:29:06.020: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe namespace kubectl-2985'
Mar 28 12:29:06.306: INFO: stderr: ""
Mar 28 12:29:06.306: INFO: stdout: "Name:         kubectl-2985\nLabels:       e2e-framework=kubectl\n              e2e-run=de236f45-13b8-45c2-8a2c-bf9c62a16cd9\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:29:06.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2985" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":283,"completed":114,"skipped":1811,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 28 12:29:17.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5565" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":283,"completed":115,"skipped":1834,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
Mar 28 12:29:20.636: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 28 12:29:20.636: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:29:20.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5319" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":283,"completed":116,"skipped":1847,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 12:29:20.892: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5727645f-e168-4ef2-92d8-7188fd1bef24" in namespace "downward-api-4884" to be "Succeeded or Failed"
Mar 28 12:29:20.923: INFO: Pod "downwardapi-volume-5727645f-e168-4ef2-92d8-7188fd1bef24": Phase="Pending", Reason="", readiness=false. Elapsed: 31.310962ms
Mar 28 12:29:22.956: INFO: Pod "downwardapi-volume-5727645f-e168-4ef2-92d8-7188fd1bef24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064392009s
STEP: Saw pod success
Mar 28 12:29:22.956: INFO: Pod "downwardapi-volume-5727645f-e168-4ef2-92d8-7188fd1bef24" satisfied condition "Succeeded or Failed"
Mar 28 12:29:22.990: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-5727645f-e168-4ef2-92d8-7188fd1bef24 container client-container: <nil>
STEP: delete the pod
Mar 28 12:29:23.082: INFO: Waiting for pod downwardapi-volume-5727645f-e168-4ef2-92d8-7188fd1bef24 to disappear
Mar 28 12:29:23.113: INFO: Pod downwardapi-volume-5727645f-e168-4ef2-92d8-7188fd1bef24 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 28 12:29:23.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4884" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":283,"completed":117,"skipped":1855,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 28 12:29:23.206: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Mar 28 12:29:23.367: INFO: Waiting up to 5m0s for pod "var-expansion-58fd460d-6bdd-48f9-9cf9-7a7425443368" in namespace "var-expansion-6697" to be "Succeeded or Failed"
Mar 28 12:29:23.397: INFO: Pod "var-expansion-58fd460d-6bdd-48f9-9cf9-7a7425443368": Phase="Pending", Reason="", readiness=false. Elapsed: 30.293673ms
Mar 28 12:29:25.429: INFO: Pod "var-expansion-58fd460d-6bdd-48f9-9cf9-7a7425443368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061807251s
STEP: Saw pod success
Mar 28 12:29:25.429: INFO: Pod "var-expansion-58fd460d-6bdd-48f9-9cf9-7a7425443368" satisfied condition "Succeeded or Failed"
Mar 28 12:29:25.459: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod var-expansion-58fd460d-6bdd-48f9-9cf9-7a7425443368 container dapi-container: <nil>
STEP: delete the pod
Mar 28 12:29:25.538: INFO: Waiting for pod var-expansion-58fd460d-6bdd-48f9-9cf9-7a7425443368 to disappear
Mar 28 12:29:25.571: INFO: Pod var-expansion-58fd460d-6bdd-48f9-9cf9-7a7425443368 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 28 12:29:25.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6697" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":283,"completed":118,"skipped":1880,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 12:29:25.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dea7ea84-bd03-4116-b173-003ce7c23d5a" in namespace "projected-7905" to be "Succeeded or Failed"
Mar 28 12:29:25.870: INFO: Pod "downwardapi-volume-dea7ea84-bd03-4116-b173-003ce7c23d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.477642ms
Mar 28 12:29:27.902: INFO: Pod "downwardapi-volume-dea7ea84-bd03-4116-b173-003ce7c23d5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065646684s
STEP: Saw pod success
Mar 28 12:29:27.902: INFO: Pod "downwardapi-volume-dea7ea84-bd03-4116-b173-003ce7c23d5a" satisfied condition "Succeeded or Failed"
Mar 28 12:29:27.933: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-dea7ea84-bd03-4116-b173-003ce7c23d5a container client-container: <nil>
STEP: delete the pod
Mar 28 12:29:28.012: INFO: Waiting for pod downwardapi-volume-dea7ea84-bd03-4116-b173-003ce7c23d5a to disappear
Mar 28 12:29:28.094: INFO: Pod downwardapi-volume-dea7ea84-bd03-4116-b173-003ce7c23d5a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 28 12:29:28.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7905" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":283,"completed":119,"skipped":1904,"failed":0}

------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-df795f67-ba7c-4d27-8a5c-3e2f863f8c64
STEP: Creating a pod to test consume configMaps
Mar 28 12:29:28.405: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee38872a-3c5c-4f38-958e-81158d5144ba" in namespace "configmap-8198" to be "Succeeded or Failed"
Mar 28 12:29:28.436: INFO: Pod "pod-configmaps-ee38872a-3c5c-4f38-958e-81158d5144ba": Phase="Pending", Reason="", readiness=false. Elapsed: 30.566759ms
Mar 28 12:29:30.467: INFO: Pod "pod-configmaps-ee38872a-3c5c-4f38-958e-81158d5144ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061258067s
STEP: Saw pod success
Mar 28 12:29:30.467: INFO: Pod "pod-configmaps-ee38872a-3c5c-4f38-958e-81158d5144ba" satisfied condition "Succeeded or Failed"
Mar 28 12:29:30.497: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-configmaps-ee38872a-3c5c-4f38-958e-81158d5144ba container configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 12:29:30.583: INFO: Waiting for pod pod-configmaps-ee38872a-3c5c-4f38-958e-81158d5144ba to disappear
Mar 28 12:29:30.615: INFO: Pod pod-configmaps-ee38872a-3c5c-4f38-958e-81158d5144ba no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 12:29:30.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8198" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":120,"skipped":1904,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 28 12:29:33.057: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 28 12:29:33.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6246" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":283,"completed":121,"skipped":1919,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 15 lines ...
Mar 28 12:29:42.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 28 12:29:44.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 28 12:29:46.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720995374, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 28 12:30:48.717: INFO: Waited 1m0.245980347s for the sample-apiserver to be ready to handle requests.
Mar 28 12:30:48.717: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","selfLink":"/apis/apiregistration.k8s.io/v1/apiservices/v1alpha1.wardle.example.com","uid":"30d03529-2ff5-4cde-889d-900671449284","resourceVersion":"12964","creationTimestamp":"2020-03-28T12:29:48Z"},"spec":{"service":{"namespace":"aggregator-9248","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyRENDQWNDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpBd016STRNVEl5T1RNeldoY05NekF3TXpJMk1USXlPVE16V2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUUNuektneGRXajNxekdzQTdqaTFpemQwVVMrdGtpaFh3Ujk0WlRja2Q5OFlTTDcKYTJENzFUSjY4YWljM0FzNE5VV252OXlVbmw0UmI1bHI1R2xPNXFONDdkL2d1Y1E1d1dYd0RFN2R5NUR5RXlIZQo1TWpzTkVZNW1UMjl5cGNvMmpnWHg0WnZRZHFCYTVrcVJnd2IxQW1GaVhITUU1cUhkY0JQRm5GMEhiRnlYcGVoClhkbVgrWGNUcWdxUEQySFZNcURuVi9VUHRCelp3YzVkRFNPa1JPQ2pXWjVNakJWcCt1eTBHN0g1ZW81bDY3VksKWTJRTDZ3MENSeTJFV3RPY3NnU0VvNVFDQjJEWmhLY2lEOHF6bTFhT05MSnFlUnMvRzNRQVJJUmp2OEE5Z1R4NwpJQURnVTRMWTZueHMrdk9BK0wwODg1RmZJL1NkRXpjOGhiR054VmdCQWdNQkFBR2pJekFoTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDbEF5MUEKUGtLK3Q3bVYrYXlkdktXek03d1Q1NThrdFhQNW5OZFJTK2tmUTlkOWtCelhXME9ZdFhMdEhEdnY2Rk90Qmd6WQpNeXR2dnk1a0grUlhvN1VqSUZFbFlRQmkyT2ZlWG5qekdmWS9RdFNuYmNoZWErdkZUOXhSTkY5aCtubGtLWnplCm83d2plelFlQ2xIWG5WQmNKVWlUV29JMHJVYXl6TFpxV1hmWDlINVNaOFdaODc1SXV5blgyLzBwL1dFZDRoUUsKZkdvcDg5ZGJBMlZHZ3VyV0d4RTR2aFZ5TzhHY3Z2ZmZzQ04reHI4WEVZcHFjT2VLc2oxa3BEYWZVSXlmZkFZTAovcjJ6b25SbEdFT3NVcThKV1hIZk5ObzNQRXJyYlFJbDB1Rzc0bTNCNUhHbkd0eUZKNlE0RzdQa3R5a3lDdU94Ci9CNHVRazBMSXRJbWcybnQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2020-03-28T12:29:48Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.111.2.12:7443/apis/wardle.example.com/v1alpha1: bad status from https://10.111.2.12:7443/apis/wardle.example.com/v1alpha1: 403"}]}}
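The FailedDiscoveryCheck/403 above is also visible on the APIService object itself; one way to pull just the Available condition's message:

kubectl get apiservice v1alpha1.wardle.example.com \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'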
Mar 28 12:30:48.719: INFO: current pods: {"metadata":{"selfLink":"/api/v1/namespaces/aggregator-9248/pods","resourceVersion":"13079"},"items":[{"metadata":{"name":"sample-apiserver-deployment-54b47bf96b-6kwmb","generateName":"sample-apiserver-deployment-54b47bf96b-","namespace":"aggregator-9248","selfLink":"/api/v1/namespaces/aggregator-9248/pods/sample-apiserver-deployment-54b47bf96b-6kwmb","uid":"2ae69d0b-9ae0-4339-a2e7-446fec13b986","resourceVersion":"12956","creationTimestamp":"2020-03-28T12:29:34Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"54b47bf96b"},"annotations":{"cni.projectcalico.org/podIP":"192.168.154.172/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-54b47bf96b","uid":"8feb8128-fdb3-448c-bb4b-b77b83534229","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"default-token-mgzbs","secret":{"secretName":"default-token-mgzbs","defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"default-token-mgzbs","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.4","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"default-token-mgzbs","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-28T12:29:34Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-28T12:29:46Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-28T12:29:46Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-28T12:29:34Z"}],"hostIP":"10.150.0.4","podIP":"192.168.154.172","podIPs":[{"ip":"192.168.154.172"}],"startTime":"2020-03-28T12:29:34Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2020-03-28T12:29:46Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.4","imageID":"k8s.gcr.io/etcd@sha256:e10ee22e7b56d08b7cb7da2a390863c445d66a7284294cee8c9decbfb3ba4359","containerID":"containerd://e300aa77ac27281383f8c5042a7c52eb460a07cbdb1bcb6bc56ce785b97ca5c9","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2020-03-28T12:29:37Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17","imageID":"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55","containerID":"containerd://9da3c9a34f84f91973e61186f8fd5f2bcfaf6c39e4288e35a62676ac93361440","started":true}],"qosClass":"BestEffort"}}]}
Mar 28 12:30:48.808: INFO: logs of sample-apiserver-deployment-54b47bf96b-6kwmb/sample-apiserver (error: <nil>): W0328 12:29:37.853057       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0328 12:29:37.853213       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
I0328 12:29:37.881141       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
I0328 12:29:37.881163       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
I0328 12:29:37.888536       1 client.go:361] parsed scheme: "endpoint"
I0328 12:29:37.888578       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0328 12:29:37.889109       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0328 12:29:38.080114       1 client.go:361] parsed scheme: "endpoint"
I0328 12:29:38.080213       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0328 12:29:38.080530       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0328 12:29:38.889651       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0328 12:29:39.081113       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0328 12:29:40.538263       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0328 12:29:40.602083       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0328 12:29:42.894442       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0328 12:29:42.921980       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0328 12:29:47.137373       1 client.go:361] parsed scheme: "endpoint"
I0328 12:29:47.137418       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0328 12:29:47.138960       1 client.go:361] parsed scheme: "endpoint"
I0328 12:29:47.139143       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0328 12:29:47.143774       1 client.go:361] parsed scheme: "endpoint"
I0328 12:29:47.143816       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0328 12:29:47.199988       1 secure_serving.go:178] Serving securely on [::]:443
I0328 12:29:47.202739       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0328 12:29:47.202956       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0328 12:29:47.203144       1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key
I0328 12:29:47.203293       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0328 12:29:47.204328       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0328 12:29:47.204510       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0328 12:29:47.209751       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:29:47.210560       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
... skipping 92 lines ...
E0328 12:30:34.302000       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:34.305874       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:35.303871       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:35.307253       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:36.305742       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:36.308695       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:37.307422       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:37.310043       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:38.309749       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:38.311461       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:39.311616       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:39.314189       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:40.313410       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:40.315557       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:41.315163       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:41.316875       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:42.316954       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:42.319234       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:43.318892       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:43.320862       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:44.320600       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:44.322092       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:45.322715       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:45.324074       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:46.324693       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:46.326816       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:47.326466       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:47.329310       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:48.328469       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 12:30:48.330729       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-9248:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
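
The repeated forbidden errors above come from the sample apiserver pod: its reflector cannot read the extension-apiserver-authentication ConfigMap because the default service account in aggregator-9248 lacks RBAC access to it. A minimal sketch of the kind of RoleBinding that clears this error, binding that service account to the built-in extension-apiserver-authentication-reader role; client-go v0.17 signatures to match the path in the log, kubeconfig path and binding name are assumptions, and this is illustrative repair, not part of the test output:

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the e2e run uses its own provider kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Bind the test namespace's default service account to the built-in
	// extension-apiserver-authentication-reader role in kube-system, which
	// grants read access to the ConfigMap the reflector is failing to list.
	binding := &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "wardle-auth-reader", // hypothetical name
			Namespace: "kube-system",
		},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "Role",
			Name:     "extension-apiserver-authentication-reader",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "aggregator-9248", // namespace taken from the log
		}},
	}
	if _, err := clientset.RbacV1().RoleBindings("kube-system").Create(binding); err != nil {
		panic(err)
	}
	fmt.Println("role binding created")
}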

Mar 28 12:30:48.844: INFO: logs of sample-apiserver-deployment-54b47bf96b-6kwmb/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-28 12:29:46.707605 I | etcdmain: etcd Version: 3.4.4
2020-03-28 12:29:46.707646 I | etcdmain: Git SHA: c65a9e2dd
2020-03-28 12:29:46.707654 I | etcdmain: Go Version: go1.12.12
2020-03-28 12:29:46.707693 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-28 12:29:46.707700 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-03-28 12:29:46.707835 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
2020-03-28 12:29:47.017950 N | etcdserver/membership: set the initial cluster version to 3.4
2020-03-28 12:29:47.018176 I | etcdserver/api: enabled capabilities for version 3.4
2020-03-28 12:29:47.018464 I | etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379]} to cluster cdf818194e3a8c32
2020-03-28 12:29:47.018562 I | embed: ready to serve client requests
2020-03-28 12:29:47.019567 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!

Mar 28 12:30:48.844: FAIL: gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc0000cdff0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
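
The "timed out waiting for the condition" string above is the stock error produced by the polling helpers in k8s.io/apimachinery/pkg/util/wait: the test repeatedly checks whether the wardle APIService has become available and gives up at the timeout. A minimal sketch of that polling pattern, where the condition body is a stand-in for the test's actual availability check:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every second for up to a minute; on timeout, wait.ErrWaitTimeout
	// stringifies to "timed out waiting for the condition" -- the exact
	// text reported in the failure above.
	err := wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
		return apiServiceAvailable(), nil
	})
	if err != nil {
		fmt.Println(err) // "timed out waiting for the condition"
	}
}

// apiServiceAvailable is a hypothetical stand-in for querying the wardle
// APIService's status conditions via the apiregistration client.
func apiServiceAvailable() bool {
	return false
}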

... skipping 153 lines ...
[sig-api-machinery] Aggregator
test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
  test/e2e/framework/framework.go:597

  Mar 28 12:30:48.844: gave up waiting for apiservice wardle to come up successfully
  Unexpected error:
      <*errors.errorString | 0xc0000cdff0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  test/e2e/apimachinery/aggregator.go:401
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":283,"completed":121,"skipped":1928,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Mar 28 12:31:01.745: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5216 /api/v1/namespaces/watch-5216/configmaps/e2e-watch-test-label-changed f9fe5d8e-d932-496c-9da7-43ad1efb2be7 13154 0 2020-03-28 12:30:51 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 28 12:31:01.745: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5216 /api/v1/namespaces/watch-5216/configmaps/e2e-watch-test-label-changed f9fe5d8e-d932-496c-9da7-43ad1efb2be7 13155 0 2020-03-28 12:30:51 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 28 12:31:01.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5216" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":283,"completed":122,"skipped":1931,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:175
Mar 28 12:31:02.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8593" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":283,"completed":123,"skipped":1947,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
Mar 28 12:31:02.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2406" for this suite.
STEP: Destroying namespace "nspatchtest-d8606190-5d3d-4b72-95eb-f06345bdbf89-7672" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":283,"completed":124,"skipped":1961,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 28 12:31:02.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8835" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":283,"completed":125,"skipped":1966,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Mar 28 12:31:07.010: INFO: stderr: ""
Mar 28 12:31:07.010: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:31:07.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2178" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":283,"completed":126,"skipped":1969,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 145 lines ...
Mar 28 12:31:33.029: INFO: stderr: ""
Mar 28 12:31:33.029: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:31:33.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5095" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":283,"completed":127,"skipped":1971,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 12:31:33.127: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 28 12:31:33.300: INFO: Waiting up to 5m0s for pod "pod-cfc3deb0-dbfe-4f56-b7fa-f559f3fc008b" in namespace "emptydir-6010" to be "Succeeded or Failed"
Mar 28 12:31:33.330: INFO: Pod "pod-cfc3deb0-dbfe-4f56-b7fa-f559f3fc008b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.625885ms
Mar 28 12:31:35.361: INFO: Pod "pod-cfc3deb0-dbfe-4f56-b7fa-f559f3fc008b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060963925s
STEP: Saw pod success
Mar 28 12:31:35.361: INFO: Pod "pod-cfc3deb0-dbfe-4f56-b7fa-f559f3fc008b" satisfied condition "Succeeded or Failed"
Mar 28 12:31:35.391: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-cfc3deb0-dbfe-4f56-b7fa-f559f3fc008b container test-container: <nil>
STEP: delete the pod
Mar 28 12:31:35.481: INFO: Waiting for pod pod-cfc3deb0-dbfe-4f56-b7fa-f559f3fc008b to disappear
Mar 28 12:31:35.512: INFO: Pod pod-cfc3deb0-dbfe-4f56-b7fa-f559f3fc008b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 12:31:35.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6010" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":128,"skipped":2000,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 28 12:31:35.729: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 28 12:31:39.032: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:31:52.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-24" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":283,"completed":129,"skipped":2009,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
Mar 28 12:32:43.157: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-28T12:32:02Z generation:2 name:name2 resourceVersion:13611 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7031fc57-fca1-4ea1-b591-384f298a19e5] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:32:53.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-4019" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":283,"completed":130,"skipped":2073,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
Mar 28 12:33:01.081: INFO: stderr: ""
Mar 28 12:33:01.081: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:33:01.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1303" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":283,"completed":131,"skipped":2076,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-d33916f8-c2e7-4010-9eaa-fc607e899f86
STEP: Creating a pod to test consume secrets
Mar 28 12:33:01.380: INFO: Waiting up to 5m0s for pod "pod-secrets-bf902557-7018-445a-b565-7f3404e01bff" in namespace "secrets-7003" to be "Succeeded or Failed"
Mar 28 12:33:01.409: INFO: Pod "pod-secrets-bf902557-7018-445a-b565-7f3404e01bff": Phase="Pending", Reason="", readiness=false. Elapsed: 29.630749ms
Mar 28 12:33:03.440: INFO: Pod "pod-secrets-bf902557-7018-445a-b565-7f3404e01bff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060126651s
STEP: Saw pod success
Mar 28 12:33:03.440: INFO: Pod "pod-secrets-bf902557-7018-445a-b565-7f3404e01bff" satisfied condition "Succeeded or Failed"
Mar 28 12:33:03.470: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-secrets-bf902557-7018-445a-b565-7f3404e01bff container secret-env-test: <nil>
STEP: delete the pod
Mar 28 12:33:03.548: INFO: Waiting for pod pod-secrets-bf902557-7018-445a-b565-7f3404e01bff to disappear
Mar 28 12:33:03.579: INFO: Pod pod-secrets-bf902557-7018-445a-b565-7f3404e01bff no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 28 12:33:03.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7003" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":283,"completed":132,"skipped":2087,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 28 12:33:09.148: INFO: stderr: ""
Mar 28 12:33:09.148: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4730-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:33:11.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-276" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":283,"completed":133,"skipped":2097,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 28 12:33:14.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9264" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":283,"completed":134,"skipped":2115,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 12:33:16.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6929" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":135,"skipped":2119,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Mar 28 12:33:19.534: INFO: Terminating Job.batch foo pods took: 300.274825ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 28 12:34:01.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7464" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":283,"completed":136,"skipped":2119,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:34:21.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4668" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":283,"completed":137,"skipped":2148,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Mar 28 12:34:27.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6463" for this suite.
STEP: Destroying namespace "webhook-6463-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":283,"completed":138,"skipped":2169,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 28 12:34:29.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9645" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":139,"skipped":2185,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 28 12:34:30.088: INFO: stderr: ""
Mar 28 12:34:30.088: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd.projectcalico.org/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:34:30.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1225" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":283,"completed":140,"skipped":2213,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:34:37.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1031" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":283,"completed":141,"skipped":2216,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 28 12:34:40.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7374" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":283,"completed":142,"skipped":2221,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 34 lines ...
Mar 28 12:36:54.177: INFO: Deleting pod "var-expansion-bf325c86-7d1f-41ec-a55e-dd61bd570d80" in namespace "var-expansion-71"
Mar 28 12:36:54.212: INFO: Wait up to 5m0s for pod "var-expansion-bf325c86-7d1f-41ec-a55e-dd61bd570d80" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 28 12:37:32.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-71" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":283,"completed":143,"skipped":2252,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 28 12:37:32.367: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Mar 28 12:37:32.535: INFO: Waiting up to 5m0s for pod "client-containers-4ab90f3f-f749-4cd0-bd51-353b7772293f" in namespace "containers-8828" to be "Succeeded or Failed"
Mar 28 12:37:32.566: INFO: Pod "client-containers-4ab90f3f-f749-4cd0-bd51-353b7772293f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.98433ms
Mar 28 12:37:34.596: INFO: Pod "client-containers-4ab90f3f-f749-4cd0-bd51-353b7772293f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061275802s
STEP: Saw pod success
Mar 28 12:37:34.596: INFO: Pod "client-containers-4ab90f3f-f749-4cd0-bd51-353b7772293f" satisfied condition "Succeeded or Failed"
Mar 28 12:37:34.629: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod client-containers-4ab90f3f-f749-4cd0-bd51-353b7772293f container test-container: <nil>
STEP: delete the pod
Mar 28 12:37:34.723: INFO: Waiting for pod client-containers-4ab90f3f-f749-4cd0-bd51-353b7772293f to disappear
Mar 28 12:37:34.756: INFO: Pod client-containers-4ab90f3f-f749-4cd0-bd51-353b7772293f no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 28 12:37:34.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8828" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":283,"completed":144,"skipped":2261,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:37:35.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1762" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":283,"completed":145,"skipped":2265,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 28 12:37:35.254: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 28 12:37:38.052: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:37:51.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9263" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":283,"completed":146,"skipped":2283,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 12:37:51.499: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 28 12:37:51.670: INFO: Waiting up to 5m0s for pod "pod-d7dd0b62-e555-4aa0-be5f-62f35f4a0c78" in namespace "emptydir-9875" to be "Succeeded or Failed"
Mar 28 12:37:51.712: INFO: Pod "pod-d7dd0b62-e555-4aa0-be5f-62f35f4a0c78": Phase="Pending", Reason="", readiness=false. Elapsed: 41.228549ms
Mar 28 12:37:53.758: INFO: Pod "pod-d7dd0b62-e555-4aa0-be5f-62f35f4a0c78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.087172898s
STEP: Saw pod success
Mar 28 12:37:53.758: INFO: Pod "pod-d7dd0b62-e555-4aa0-be5f-62f35f4a0c78" satisfied condition "Succeeded or Failed"
Mar 28 12:37:53.788: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-d7dd0b62-e555-4aa0-be5f-62f35f4a0c78 container test-container: <nil>
STEP: delete the pod
Mar 28 12:37:53.885: INFO: Waiting for pod pod-d7dd0b62-e555-4aa0-be5f-62f35f4a0c78 to disappear
Mar 28 12:37:53.916: INFO: Pod pod-d7dd0b62-e555-4aa0-be5f-62f35f4a0c78 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 12:37:53.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9875" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":147,"skipped":2284,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-jgqn
STEP: Creating a pod to test atomic-volume-subpath
Mar 28 12:37:54.244: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jgqn" in namespace "subpath-2045" to be "Succeeded or Failed"
Mar 28 12:37:54.276: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Pending", Reason="", readiness=false. Elapsed: 32.123569ms
Mar 28 12:37:56.307: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Running", Reason="", readiness=true. Elapsed: 2.063591853s
Mar 28 12:37:58.338: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Running", Reason="", readiness=true. Elapsed: 4.094515431s
Mar 28 12:38:00.369: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Running", Reason="", readiness=true. Elapsed: 6.125684687s
Mar 28 12:38:02.400: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Running", Reason="", readiness=true. Elapsed: 8.156471675s
Mar 28 12:38:04.431: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Running", Reason="", readiness=true. Elapsed: 10.187499875s
Mar 28 12:38:06.465: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Running", Reason="", readiness=true. Elapsed: 12.221109963s
Mar 28 12:38:08.496: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Running", Reason="", readiness=true. Elapsed: 14.252187893s
Mar 28 12:38:10.527: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Running", Reason="", readiness=true. Elapsed: 16.283095387s
Mar 28 12:38:12.558: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Running", Reason="", readiness=true. Elapsed: 18.313809004s
Mar 28 12:38:14.589: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Running", Reason="", readiness=true. Elapsed: 20.345098502s
Mar 28 12:38:16.619: INFO: Pod "pod-subpath-test-configmap-jgqn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.375529602s
STEP: Saw pod success
Mar 28 12:38:16.619: INFO: Pod "pod-subpath-test-configmap-jgqn" satisfied condition "Succeeded or Failed"
Mar 28 12:38:16.650: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-subpath-test-configmap-jgqn container test-container-subpath-configmap-jgqn: <nil>
STEP: delete the pod
Mar 28 12:38:16.735: INFO: Waiting for pod pod-subpath-test-configmap-jgqn to disappear
Mar 28 12:38:16.766: INFO: Pod pod-subpath-test-configmap-jgqn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jgqn
Mar 28 12:38:16.766: INFO: Deleting pod "pod-subpath-test-configmap-jgqn" in namespace "subpath-2045"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 28 12:38:16.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2045" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":283,"completed":148,"skipped":2292,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 28 12:38:31.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5337" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":283,"completed":149,"skipped":2315,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-xtpp
STEP: Creating a pod to test atomic-volume-subpath
Mar 28 12:38:31.461: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xtpp" in namespace "subpath-7545" to be "Succeeded or Failed"
Mar 28 12:38:31.492: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Pending", Reason="", readiness=false. Elapsed: 30.747202ms
Mar 28 12:38:33.523: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Running", Reason="", readiness=true. Elapsed: 2.061502745s
Mar 28 12:38:35.554: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Running", Reason="", readiness=true. Elapsed: 4.092447452s
Mar 28 12:38:37.584: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Running", Reason="", readiness=true. Elapsed: 6.123099368s
Mar 28 12:38:39.615: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Running", Reason="", readiness=true. Elapsed: 8.154047668s
Mar 28 12:38:41.646: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Running", Reason="", readiness=true. Elapsed: 10.184575248s
Mar 28 12:38:43.676: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Running", Reason="", readiness=true. Elapsed: 12.215401453s
Mar 28 12:38:45.707: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Running", Reason="", readiness=true. Elapsed: 14.246228526s
Mar 28 12:38:47.738: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Running", Reason="", readiness=true. Elapsed: 16.276835484s
Mar 28 12:38:49.771: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Running", Reason="", readiness=true. Elapsed: 18.30994536s
Mar 28 12:38:51.802: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Running", Reason="", readiness=true. Elapsed: 20.340430702s
Mar 28 12:38:53.832: INFO: Pod "pod-subpath-test-downwardapi-xtpp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.370745956s
STEP: Saw pod success
Mar 28 12:38:53.832: INFO: Pod "pod-subpath-test-downwardapi-xtpp" satisfied condition "Succeeded or Failed"
Mar 28 12:38:53.862: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-subpath-test-downwardapi-xtpp container test-container-subpath-downwardapi-xtpp: <nil>
STEP: delete the pod
Mar 28 12:38:53.943: INFO: Waiting for pod pod-subpath-test-downwardapi-xtpp to disappear
Mar 28 12:38:53.975: INFO: Pod pod-subpath-test-downwardapi-xtpp no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-xtpp
Mar 28 12:38:53.975: INFO: Deleting pod "pod-subpath-test-downwardapi-xtpp" in namespace "subpath-7545"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 28 12:38:54.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7545" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":283,"completed":150,"skipped":2323,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 55 lines ...
Mar 28 12:39:21.145: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-882/pods","resourceVersion":"15314"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 28 12:39:21.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-882" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":283,"completed":151,"skipped":2336,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-f9612386-0855-4076-93b8-5c5cea0854a8
STEP: Creating a pod to test consume configMaps
Mar 28 12:39:21.538: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3bca1251-8138-440c-aa80-113aaa247d41" in namespace "projected-2897" to be "Succeeded or Failed"
Mar 28 12:39:21.574: INFO: Pod "pod-projected-configmaps-3bca1251-8138-440c-aa80-113aaa247d41": Phase="Pending", Reason="", readiness=false. Elapsed: 35.604483ms
Mar 28 12:39:23.605: INFO: Pod "pod-projected-configmaps-3bca1251-8138-440c-aa80-113aaa247d41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06633104s
STEP: Saw pod success
Mar 28 12:39:23.605: INFO: Pod "pod-projected-configmaps-3bca1251-8138-440c-aa80-113aaa247d41" satisfied condition "Succeeded or Failed"
Mar 28 12:39:23.635: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-projected-configmaps-3bca1251-8138-440c-aa80-113aaa247d41 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 12:39:23.738: INFO: Waiting for pod pod-projected-configmaps-3bca1251-8138-440c-aa80-113aaa247d41 to disappear
Mar 28 12:39:23.769: INFO: Pod pod-projected-configmaps-3bca1251-8138-440c-aa80-113aaa247d41 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 28 12:39:23.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2897" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":152,"skipped":2347,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
Mar 28 12:39:30.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8240" for this suite.
STEP: Destroying namespace "webhook-8240-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":283,"completed":153,"skipped":2367,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 12:39:31.510: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b2c3b39-945a-40ac-b457-b60dcd08dada" in namespace "downward-api-612" to be "Succeeded or Failed"
Mar 28 12:39:31.541: INFO: Pod "downwardapi-volume-3b2c3b39-945a-40ac-b457-b60dcd08dada": Phase="Pending", Reason="", readiness=false. Elapsed: 31.683562ms
Mar 28 12:39:33.574: INFO: Pod "downwardapi-volume-3b2c3b39-945a-40ac-b457-b60dcd08dada": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064257663s
STEP: Saw pod success
Mar 28 12:39:33.574: INFO: Pod "downwardapi-volume-3b2c3b39-945a-40ac-b457-b60dcd08dada" satisfied condition "Succeeded or Failed"
Mar 28 12:39:33.605: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-3b2c3b39-945a-40ac-b457-b60dcd08dada container client-container: <nil>
STEP: delete the pod
Mar 28 12:39:33.685: INFO: Waiting for pod downwardapi-volume-3b2c3b39-945a-40ac-b457-b60dcd08dada to disappear
Mar 28 12:39:33.719: INFO: Pod downwardapi-volume-3b2c3b39-945a-40ac-b457-b60dcd08dada no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 28 12:39:33.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-612" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":154,"skipped":2413,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 28 12:39:33.941: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:39:39.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8740" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":283,"completed":155,"skipped":2442,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-4fb63025-eb05-476f-9e08-399121aeefcc
STEP: Creating a pod to test consume configMaps
Mar 28 12:39:40.250: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-862216d1-748b-4b17-9154-c39ba04baf2f" in namespace "projected-7372" to be "Succeeded or Failed"
Mar 28 12:39:40.303: INFO: Pod "pod-projected-configmaps-862216d1-748b-4b17-9154-c39ba04baf2f": Phase="Pending", Reason="", readiness=false. Elapsed: 52.509277ms
Mar 28 12:39:42.353: INFO: Pod "pod-projected-configmaps-862216d1-748b-4b17-9154-c39ba04baf2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.102606118s
STEP: Saw pod success
Mar 28 12:39:42.353: INFO: Pod "pod-projected-configmaps-862216d1-748b-4b17-9154-c39ba04baf2f" satisfied condition "Succeeded or Failed"
Mar 28 12:39:42.385: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-projected-configmaps-862216d1-748b-4b17-9154-c39ba04baf2f container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 12:39:42.466: INFO: Waiting for pod pod-projected-configmaps-862216d1-748b-4b17-9154-c39ba04baf2f to disappear
Mar 28 12:39:42.498: INFO: Pod pod-projected-configmaps-862216d1-748b-4b17-9154-c39ba04baf2f no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 28 12:39:42.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7372" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":156,"skipped":2451,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 12:39:42.590: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 28 12:39:42.754: INFO: Waiting up to 5m0s for pod "pod-5481ee53-e8f5-4971-bb20-d92198a6ee45" in namespace "emptydir-2610" to be "Succeeded or Failed"
Mar 28 12:39:42.785: INFO: Pod "pod-5481ee53-e8f5-4971-bb20-d92198a6ee45": Phase="Pending", Reason="", readiness=false. Elapsed: 30.805342ms
Mar 28 12:39:44.815: INFO: Pod "pod-5481ee53-e8f5-4971-bb20-d92198a6ee45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060901968s
STEP: Saw pod success
Mar 28 12:39:44.815: INFO: Pod "pod-5481ee53-e8f5-4971-bb20-d92198a6ee45" satisfied condition "Succeeded or Failed"
Mar 28 12:39:44.846: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-5481ee53-e8f5-4971-bb20-d92198a6ee45 container test-container: <nil>
STEP: delete the pod
Mar 28 12:39:44.924: INFO: Waiting for pod pod-5481ee53-e8f5-4971-bb20-d92198a6ee45 to disappear
Mar 28 12:39:44.953: INFO: Pod pod-5481ee53-e8f5-4971-bb20-d92198a6ee45 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 12:39:44.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2610" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":157,"skipped":2484,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 57 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 28 12:39:49.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3320" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":283,"completed":158,"skipped":2488,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Mar 28 12:39:51.662: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:51.693: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:51.920: INFO: Unable to read jessie_udp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:51.953: INFO: Unable to read jessie_tcp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:51.986: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:52.017: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:52.211: INFO: Lookups using dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546 failed for: [wheezy_udp@dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@dns-test-service.dns-4282.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_udp@dns-test-service.dns-4282.svc.cluster.local jessie_tcp@dns-test-service.dns-4282.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local]

Mar 28 12:39:57.244: INFO: Unable to read wheezy_udp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:57.276: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:57.307: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:57.338: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:57.562: INFO: Unable to read jessie_udp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:57.594: INFO: Unable to read jessie_tcp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:57.626: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:57.658: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:39:57.850: INFO: Lookups using dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546 failed for: [wheezy_udp@dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@dns-test-service.dns-4282.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_udp@dns-test-service.dns-4282.svc.cluster.local jessie_tcp@dns-test-service.dns-4282.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local]

Mar 28 12:40:02.251: INFO: Unable to read wheezy_udp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:02.285: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:02.316: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:02.348: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:02.575: INFO: Unable to read jessie_udp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:02.607: INFO: Unable to read jessie_tcp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:02.639: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:02.672: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:02.867: INFO: Lookups using dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546 failed for: [wheezy_udp@dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@dns-test-service.dns-4282.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_udp@dns-test-service.dns-4282.svc.cluster.local jessie_tcp@dns-test-service.dns-4282.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local]

Mar 28 12:40:07.248: INFO: Unable to read wheezy_udp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:07.280: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:07.318: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:07.349: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:07.574: INFO: Unable to read jessie_udp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:07.607: INFO: Unable to read jessie_tcp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:07.638: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:07.669: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:07.860: INFO: Lookups using dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546 failed for: [wheezy_udp@dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@dns-test-service.dns-4282.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_udp@dns-test-service.dns-4282.svc.cluster.local jessie_tcp@dns-test-service.dns-4282.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local]

Mar 28 12:40:12.243: INFO: Unable to read wheezy_udp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:12.275: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:12.306: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:12.338: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:12.564: INFO: Unable to read jessie_udp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:12.596: INFO: Unable to read jessie_tcp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:12.627: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:12.660: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:12.851: INFO: Lookups using dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546 failed for: [wheezy_udp@dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@dns-test-service.dns-4282.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_udp@dns-test-service.dns-4282.svc.cluster.local jessie_tcp@dns-test-service.dns-4282.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local]

Mar 28 12:40:17.243: INFO: Unable to read wheezy_udp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:17.276: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:17.315: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:17.347: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:17.573: INFO: Unable to read jessie_udp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:17.604: INFO: Unable to read jessie_tcp@dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:17.637: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:17.669: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local from pod dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546: the server could not find the requested resource (get pods dns-test-557e489a-b775-4b96-8aae-cc2d220d7546)
Mar 28 12:40:17.862: INFO: Lookups using dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546 failed for: [wheezy_udp@dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@dns-test-service.dns-4282.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_udp@dns-test-service.dns-4282.svc.cluster.local jessie_tcp@dns-test-service.dns-4282.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4282.svc.cluster.local]

Mar 28 12:40:22.852: INFO: DNS probes using dns-4282/dns-test-557e489a-b775-4b96-8aae-cc2d220d7546 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 28 12:40:23.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4282" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":283,"completed":159,"skipped":2493,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-eb25d0b6-2dc3-4ca0-8d7b-2e56abc6f8ea
STEP: Creating a pod to test consume configMaps
Mar 28 12:40:23.321: INFO: Waiting up to 5m0s for pod "pod-configmaps-95e71cf2-b0e0-4374-ae07-c84cd9f5741d" in namespace "configmap-4116" to be "Succeeded or Failed"
Mar 28 12:40:23.351: INFO: Pod "pod-configmaps-95e71cf2-b0e0-4374-ae07-c84cd9f5741d": Phase="Pending", Reason="", readiness=false. Elapsed: 29.676733ms
Mar 28 12:40:25.382: INFO: Pod "pod-configmaps-95e71cf2-b0e0-4374-ae07-c84cd9f5741d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060588762s
STEP: Saw pod success
Mar 28 12:40:25.382: INFO: Pod "pod-configmaps-95e71cf2-b0e0-4374-ae07-c84cd9f5741d" satisfied condition "Succeeded or Failed"
Mar 28 12:40:25.412: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-configmaps-95e71cf2-b0e0-4374-ae07-c84cd9f5741d container configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 12:40:25.490: INFO: Waiting for pod pod-configmaps-95e71cf2-b0e0-4374-ae07-c84cd9f5741d to disappear
Mar 28 12:40:25.521: INFO: Pod pod-configmaps-95e71cf2-b0e0-4374-ae07-c84cd9f5741d no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 12:40:25.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4116" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":160,"skipped":2517,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 28 12:40:25.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8879" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":283,"completed":161,"skipped":2521,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 28 12:40:30.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5729" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":283,"completed":162,"skipped":2562,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 28 12:41:30.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1200" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":283,"completed":163,"skipped":2591,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-202f3efb-64be-46a3-bb64-7822ae2dbe78
STEP: Creating a pod to test consume configMaps
Mar 28 12:41:30.840: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-888ef79a-7df6-4402-985f-87577148eaa2" in namespace "projected-3087" to be "Succeeded or Failed"
Mar 28 12:41:30.870: INFO: Pod "pod-projected-configmaps-888ef79a-7df6-4402-985f-87577148eaa2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.73161ms
Mar 28 12:41:32.900: INFO: Pod "pod-projected-configmaps-888ef79a-7df6-4402-985f-87577148eaa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060134851s
STEP: Saw pod success
Mar 28 12:41:32.900: INFO: Pod "pod-projected-configmaps-888ef79a-7df6-4402-985f-87577148eaa2" satisfied condition "Succeeded or Failed"
Mar 28 12:41:32.930: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-projected-configmaps-888ef79a-7df6-4402-985f-87577148eaa2 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 12:41:33.011: INFO: Waiting for pod pod-projected-configmaps-888ef79a-7df6-4402-985f-87577148eaa2 to disappear
Mar 28 12:41:33.041: INFO: Pod pod-projected-configmaps-888ef79a-7df6-4402-985f-87577148eaa2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 28 12:41:33.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3087" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":164,"skipped":2620,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Mar 28 12:41:36.509: INFO: Deleting pod "var-expansion-f457d398-526b-4d38-8e7d-22c7047e21a1" in namespace "var-expansion-1337"
Mar 28 12:41:36.544: INFO: Wait up to 5m0s for pod "var-expansion-f457d398-526b-4d38-8e7d-22c7047e21a1" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 28 12:42:10.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1337" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":283,"completed":165,"skipped":2633,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 28 12:42:10.863: INFO: Waiting up to 5m0s for pod "busybox-user-65534-9611409e-7789-47a1-88c8-f203fed9fe8f" in namespace "security-context-test-6642" to be "Succeeded or Failed"
Mar 28 12:42:10.894: INFO: Pod "busybox-user-65534-9611409e-7789-47a1-88c8-f203fed9fe8f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.611685ms
Mar 28 12:42:12.925: INFO: Pod "busybox-user-65534-9611409e-7789-47a1-88c8-f203fed9fe8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061627237s
Mar 28 12:42:12.925: INFO: Pod "busybox-user-65534-9611409e-7789-47a1-88c8-f203fed9fe8f" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 28 12:42:12.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6642" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":166,"skipped":2658,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 28 12:42:13.339: INFO: stderr: ""
Mar 28 12:42:13.339: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.1.101+b4c82622ec9d1e\", GitCommit:\"b4c82622ec9d1e58ae2c9f901e51353f0661cf20\", GitTreeState:\"clean\", BuildDate:\"2020-02-11T14:24:02Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.1\", GitCommit:\"d647ddbd755faf07169599a625faf302ffc34458\", GitTreeState:\"clean\", BuildDate:\"2019-10-02T16:51:36Z\", GoVersion:\"go1.12.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:42:13.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3634" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":283,"completed":167,"skipped":2714,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-39e22edf-bd19-42bc-990e-148da3167b83
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 28 12:42:20.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7102" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":168,"skipped":2763,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-018f495a-d8a9-49e7-85d6-929354220cc2
STEP: Creating secret with name secret-projected-all-test-volume-bc619ea2-d4bb-41b9-8fdb-5ffd6158c078
STEP: Creating a pod to test Check all projections for projected volume plugin
Mar 28 12:42:20.487: INFO: Waiting up to 5m0s for pod "projected-volume-6162223b-9708-4373-9ebd-c9130df3daf4" in namespace "projected-3942" to be "Succeeded or Failed"
Mar 28 12:42:20.517: INFO: Pod "projected-volume-6162223b-9708-4373-9ebd-c9130df3daf4": Phase="Pending", Reason="", readiness=false. Elapsed: 29.166779ms
Mar 28 12:42:22.547: INFO: Pod "projected-volume-6162223b-9708-4373-9ebd-c9130df3daf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059810697s
STEP: Saw pod success
Mar 28 12:42:22.547: INFO: Pod "projected-volume-6162223b-9708-4373-9ebd-c9130df3daf4" satisfied condition "Succeeded or Failed"
Mar 28 12:42:22.578: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod projected-volume-6162223b-9708-4373-9ebd-c9130df3daf4 container projected-all-volume-test: <nil>
STEP: delete the pod
Mar 28 12:42:22.665: INFO: Waiting for pod projected-volume-6162223b-9708-4373-9ebd-c9130df3daf4 to disappear
Mar 28 12:42:22.696: INFO: Pod projected-volume-6162223b-9708-4373-9ebd-c9130df3daf4 no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:175
Mar 28 12:42:22.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3942" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":283,"completed":169,"skipped":2793,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 28 12:42:29.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4499" for this suite.
STEP: Destroying namespace "webhook-4499-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":283,"completed":170,"skipped":2802,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-hg5p
STEP: Creating a pod to test atomic-volume-subpath
Mar 28 12:42:29.882: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hg5p" in namespace "subpath-9662" to be "Succeeded or Failed"
Mar 28 12:42:29.922: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Pending", Reason="", readiness=false. Elapsed: 39.275106ms
Mar 28 12:42:31.952: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Running", Reason="", readiness=true. Elapsed: 2.070038029s
Mar 28 12:42:33.983: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Running", Reason="", readiness=true. Elapsed: 4.100903656s
Mar 28 12:42:36.015: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Running", Reason="", readiness=true. Elapsed: 6.132685091s
Mar 28 12:42:38.046: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Running", Reason="", readiness=true. Elapsed: 8.163936423s
Mar 28 12:42:40.076: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Running", Reason="", readiness=true. Elapsed: 10.194113079s
Mar 28 12:42:42.107: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Running", Reason="", readiness=true. Elapsed: 12.225072633s
Mar 28 12:42:44.138: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Running", Reason="", readiness=true. Elapsed: 14.255725807s
Mar 28 12:42:46.169: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Running", Reason="", readiness=true. Elapsed: 16.286372772s
Mar 28 12:42:48.200: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Running", Reason="", readiness=true. Elapsed: 18.317280591s
Mar 28 12:42:50.231: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Running", Reason="", readiness=true. Elapsed: 20.348454421s
Mar 28 12:42:52.261: INFO: Pod "pod-subpath-test-secret-hg5p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.378746874s
STEP: Saw pod success
Mar 28 12:42:52.261: INFO: Pod "pod-subpath-test-secret-hg5p" satisfied condition "Succeeded or Failed"
Mar 28 12:42:52.292: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-subpath-test-secret-hg5p container test-container-subpath-secret-hg5p: <nil>
STEP: delete the pod
Mar 28 12:42:52.374: INFO: Waiting for pod pod-subpath-test-secret-hg5p to disappear
Mar 28 12:42:52.405: INFO: Pod pod-subpath-test-secret-hg5p no longer exists
STEP: Deleting pod pod-subpath-test-secret-hg5p
Mar 28 12:42:52.405: INFO: Deleting pod "pod-subpath-test-secret-hg5p" in namespace "subpath-9662"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 28 12:42:52.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9662" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":283,"completed":171,"skipped":2820,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Mar 28 12:42:57.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1139" for this suite.
STEP: Destroying namespace "webhook-1139-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":283,"completed":172,"skipped":2841,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 28 12:42:57.596: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-84324263-8975-40d3-8b40-f9e205d197a5" in namespace "security-context-test-8755" to be "Succeeded or Failed"
Mar 28 12:42:57.627: INFO: Pod "alpine-nnp-false-84324263-8975-40d3-8b40-f9e205d197a5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.982758ms
Mar 28 12:42:59.658: INFO: Pod "alpine-nnp-false-84324263-8975-40d3-8b40-f9e205d197a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061716368s
Mar 28 12:42:59.658: INFO: Pod "alpine-nnp-false-84324263-8975-40d3-8b40-f9e205d197a5" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 28 12:42:59.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8755" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":173,"skipped":2850,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 12:42:59.959: INFO: Waiting up to 5m0s for pod "downwardapi-volume-043bc3a5-7eb4-4f98-8ee2-c837aba16146" in namespace "projected-5507" to be "Succeeded or Failed"
Mar 28 12:42:59.991: INFO: Pod "downwardapi-volume-043bc3a5-7eb4-4f98-8ee2-c837aba16146": Phase="Pending", Reason="", readiness=false. Elapsed: 31.462847ms
Mar 28 12:43:02.022: INFO: Pod "downwardapi-volume-043bc3a5-7eb4-4f98-8ee2-c837aba16146": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062847096s
STEP: Saw pod success
Mar 28 12:43:02.022: INFO: Pod "downwardapi-volume-043bc3a5-7eb4-4f98-8ee2-c837aba16146" satisfied condition "Succeeded or Failed"
Mar 28 12:43:02.053: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-043bc3a5-7eb4-4f98-8ee2-c837aba16146 container client-container: <nil>
STEP: delete the pod
Mar 28 12:43:02.131: INFO: Waiting for pod downwardapi-volume-043bc3a5-7eb4-4f98-8ee2-c837aba16146 to disappear
Mar 28 12:43:02.163: INFO: Pod downwardapi-volume-043bc3a5-7eb4-4f98-8ee2-c837aba16146 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 28 12:43:02.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5507" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":174,"skipped":2861,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-998103d8-e97f-46b1-af3a-7be7679467ae
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 28 12:43:06.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3421" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":175,"skipped":2869,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 43 lines ...
Mar 28 12:43:24.088: INFO: Pod "test-rollover-deployment-78df7bc796-s5pkc" is available:
&Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-s5pkc test-rollover-deployment-78df7bc796- deployment-2368 /api/v1/namespaces/deployment-2368/pods/test-rollover-deployment-78df7bc796-s5pkc ed949a7b-6f65-43ff-b60d-365e4311d3a6 16827 0 2020-03-28 12:43:11 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78df7bc796] map[cni.projectcalico.org/podIP:192.168.172.142/32] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 780f29b4-06ec-465c-be6f-dfed9a6b29da 0xc0016bc7c7 0xc0016bc7c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7zpph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7zpph,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7zpph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:43:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 12:43:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.172.142,StartTime:2020-03-28 12:43:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-28 12:43:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://d738e60e207c766046bc342b46b6bbe01c8586491259deef0eec6c4a71f7433b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.172.142,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 28 12:43:24.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2368" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":283,"completed":176,"skipped":2874,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 28 12:43:31.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4474" for this suite.
STEP: Destroying namespace "webhook-4474-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":283,"completed":177,"skipped":2876,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-6a38b325-c619-4403-b8f7-24bef7ab384a
STEP: Creating a pod to test consume configMaps
Mar 28 12:43:31.934: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ec2e4972-6083-4c38-98d0-92f57073a277" in namespace "projected-1945" to be "Succeeded or Failed"
Mar 28 12:43:31.965: INFO: Pod "pod-projected-configmaps-ec2e4972-6083-4c38-98d0-92f57073a277": Phase="Pending", Reason="", readiness=false. Elapsed: 30.586556ms
Mar 28 12:43:33.995: INFO: Pod "pod-projected-configmaps-ec2e4972-6083-4c38-98d0-92f57073a277": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060838359s
STEP: Saw pod success
Mar 28 12:43:33.995: INFO: Pod "pod-projected-configmaps-ec2e4972-6083-4c38-98d0-92f57073a277" satisfied condition "Succeeded or Failed"
Mar 28 12:43:34.027: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-projected-configmaps-ec2e4972-6083-4c38-98d0-92f57073a277 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 12:43:34.107: INFO: Waiting for pod pod-projected-configmaps-ec2e4972-6083-4c38-98d0-92f57073a277 to disappear
Mar 28 12:43:34.137: INFO: Pod pod-projected-configmaps-ec2e4972-6083-4c38-98d0-92f57073a277 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 28 12:43:34.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1945" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":178,"skipped":2893,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 28 12:43:34.229: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 28 12:43:40.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9334" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":283,"completed":179,"skipped":2896,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 12:43:40.524: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 28 12:43:40.691: INFO: Waiting up to 5m0s for pod "pod-2c05a0f1-ce64-4df7-83a8-64a16ff875f2" in namespace "emptydir-3758" to be "Succeeded or Failed"
Mar 28 12:43:40.723: INFO: Pod "pod-2c05a0f1-ce64-4df7-83a8-64a16ff875f2": Phase="Pending", Reason="", readiness=false. Elapsed: 32.058234ms
Mar 28 12:43:42.753: INFO: Pod "pod-2c05a0f1-ce64-4df7-83a8-64a16ff875f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062225447s
STEP: Saw pod success
Mar 28 12:43:42.753: INFO: Pod "pod-2c05a0f1-ce64-4df7-83a8-64a16ff875f2" satisfied condition "Succeeded or Failed"
Mar 28 12:43:42.783: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-2c05a0f1-ce64-4df7-83a8-64a16ff875f2 container test-container: <nil>
STEP: delete the pod
Mar 28 12:43:42.861: INFO: Waiting for pod pod-2c05a0f1-ce64-4df7-83a8-64a16ff875f2 to disappear
Mar 28 12:43:42.892: INFO: Pod pod-2c05a0f1-ce64-4df7-83a8-64a16ff875f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 12:43:42.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3758" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":180,"skipped":2911,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 28 12:43:43.107: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:43:43.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-431" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":283,"completed":181,"skipped":2930,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 132 lines ...
Mar 28 12:44:37.737: INFO: ss-2  test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-28 12:44:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-28 12:44:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-28 12:44:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-28 12:44:05 +0000 UTC  }]
Mar 28 12:44:37.737: INFO: 
Mar 28 12:44:37.737: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-2680
Mar 28 12:44:38.770: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-2680 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 28 12:44:39.162: INFO: rc: 1
Mar 28 12:44:39.163: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-2680 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Mar 28 12:44:49.163: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-2680 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 28 12:44:49.383: INFO: rc: 1
Mar 28 12:44:49.383: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-2680 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
... skipping 280 lines (28 identical RunHostCmd retries, one every 10s from 12:44:59 to 12:49:35, each failing with Error from server (NotFound): pods "ss-0" not found) ...
Mar 28 12:49:45.617: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-2680 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 28 12:49:45.836: INFO: rc: 1
Mar 28 12:49:45.836: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Mar 28 12:49:45.836: INFO: Scaling statefulset ss to 0
Mar 28 12:49:45.930: INFO: Waiting for statefulset status.replicas to be updated to 0
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":283,"completed":182,"skipped":2969,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-75e364d2-c557-4d23-a7cc-29c44dad284e
STEP: Creating a pod to test consume secrets
Mar 28 12:49:46.601: INFO: Waiting up to 5m0s for pod "pod-secrets-ef8e7176-63a5-47ad-9123-8f1e1f2c1e19" in namespace "secrets-8123" to be "Succeeded or Failed"
Mar 28 12:49:46.631: INFO: Pod "pod-secrets-ef8e7176-63a5-47ad-9123-8f1e1f2c1e19": Phase="Pending", Reason="", readiness=false. Elapsed: 29.706857ms
Mar 28 12:49:48.661: INFO: Pod "pod-secrets-ef8e7176-63a5-47ad-9123-8f1e1f2c1e19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059807096s
STEP: Saw pod success
Mar 28 12:49:48.661: INFO: Pod "pod-secrets-ef8e7176-63a5-47ad-9123-8f1e1f2c1e19" satisfied condition "Succeeded or Failed"
Mar 28 12:49:48.692: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-secrets-ef8e7176-63a5-47ad-9123-8f1e1f2c1e19 container secret-volume-test: <nil>
STEP: delete the pod
Mar 28 12:49:48.790: INFO: Waiting for pod pod-secrets-ef8e7176-63a5-47ad-9123-8f1e1f2c1e19 to disappear
Mar 28 12:49:48.822: INFO: Pod pod-secrets-ef8e7176-63a5-47ad-9123-8f1e1f2c1e19 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 28 12:49:48.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8123" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":183,"skipped":2981,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 12:49:49.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51b9bdcb-3378-4b49-bbdb-4afaa662552c" in namespace "downward-api-551" to be "Succeeded or Failed"
Mar 28 12:49:49.111: INFO: Pod "downwardapi-volume-51b9bdcb-3378-4b49-bbdb-4afaa662552c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.846843ms
Mar 28 12:49:51.142: INFO: Pod "downwardapi-volume-51b9bdcb-3378-4b49-bbdb-4afaa662552c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061859915s
STEP: Saw pod success
Mar 28 12:49:51.142: INFO: Pod "downwardapi-volume-51b9bdcb-3378-4b49-bbdb-4afaa662552c" satisfied condition "Succeeded or Failed"
Mar 28 12:49:51.172: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-51b9bdcb-3378-4b49-bbdb-4afaa662552c container client-container: <nil>
STEP: delete the pod
Mar 28 12:49:51.256: INFO: Waiting for pod downwardapi-volume-51b9bdcb-3378-4b49-bbdb-4afaa662552c to disappear
Mar 28 12:49:51.285: INFO: Pod downwardapi-volume-51b9bdcb-3378-4b49-bbdb-4afaa662552c no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 28 12:49:51.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-551" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":184,"skipped":3001,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:49:56.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4334" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":283,"completed":185,"skipped":3020,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 28 12:49:59.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1792" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":283,"completed":186,"skipped":3041,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 28 12:49:59.468: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy --unix-socket=/tmp/kubectl-proxy-unix323937150/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:49:59.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4737" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":283,"completed":187,"skipped":3044,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 28 12:49:59.605: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 28 12:49:59.770: INFO: Waiting up to 5m0s for pod "downward-api-7bffd8ce-4bd5-47e0-bad3-7dadb14906a0" in namespace "downward-api-3761" to be "Succeeded or Failed"
Mar 28 12:49:59.802: INFO: Pod "downward-api-7bffd8ce-4bd5-47e0-bad3-7dadb14906a0": Phase="Pending", Reason="", readiness=false. Elapsed: 31.746584ms
Mar 28 12:50:01.832: INFO: Pod "downward-api-7bffd8ce-4bd5-47e0-bad3-7dadb14906a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061654884s
STEP: Saw pod success
Mar 28 12:50:01.832: INFO: Pod "downward-api-7bffd8ce-4bd5-47e0-bad3-7dadb14906a0" satisfied condition "Succeeded or Failed"
Mar 28 12:50:01.863: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod downward-api-7bffd8ce-4bd5-47e0-bad3-7dadb14906a0 container dapi-container: <nil>
STEP: delete the pod
Mar 28 12:50:01.962: INFO: Waiting for pod downward-api-7bffd8ce-4bd5-47e0-bad3-7dadb14906a0 to disappear
Mar 28 12:50:01.993: INFO: Pod downward-api-7bffd8ce-4bd5-47e0-bad3-7dadb14906a0 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 28 12:50:01.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3761" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":283,"completed":188,"skipped":3044,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 28 12:50:19.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3425" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":283,"completed":189,"skipped":3048,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 15 lines ...
Mar 28 12:50:25.372: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:50:38.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5438" for this suite.
STEP: Destroying namespace "webhook-5438-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":283,"completed":190,"skipped":3052,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 28 12:50:38.504: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:50:39.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6395" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":283,"completed":191,"skipped":3099,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-3e745086-ad3a-426f-805e-b9887ace51d9
STEP: Creating a pod to test consume secrets
Mar 28 12:50:39.792: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9bfdbfd8-f4a8-4550-9932-d6eaf581d4dc" in namespace "projected-2700" to be "Succeeded or Failed"
Mar 28 12:50:39.827: INFO: Pod "pod-projected-secrets-9bfdbfd8-f4a8-4550-9932-d6eaf581d4dc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.945534ms
Mar 28 12:50:41.858: INFO: Pod "pod-projected-secrets-9bfdbfd8-f4a8-4550-9932-d6eaf581d4dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065946425s
STEP: Saw pod success
Mar 28 12:50:41.858: INFO: Pod "pod-projected-secrets-9bfdbfd8-f4a8-4550-9932-d6eaf581d4dc" satisfied condition "Succeeded or Failed"
Mar 28 12:50:41.888: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-projected-secrets-9bfdbfd8-f4a8-4550-9932-d6eaf581d4dc container secret-volume-test: <nil>
STEP: delete the pod
Mar 28 12:50:41.969: INFO: Waiting for pod pod-projected-secrets-9bfdbfd8-f4a8-4550-9932-d6eaf581d4dc to disappear
Mar 28 12:50:42.000: INFO: Pod pod-projected-secrets-9bfdbfd8-f4a8-4550-9932-d6eaf581d4dc no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 28 12:50:42.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2700" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":283,"completed":192,"skipped":3131,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 132 lines ...
Mar 28 12:51:11.163: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2412/pods","resourceVersion":"18758"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 28 12:51:11.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2412" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":283,"completed":193,"skipped":3135,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 25 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:175
Mar 28 12:51:20.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5522" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":283,"completed":194,"skipped":3139,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 85 lines ...
Mar 28 12:52:25.533: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 28 12:52:25.533: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 28 12:52:25.533: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 28 12:52:25.533: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-8171 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 28 12:52:25.860: INFO: rc: 1
Mar 28 12:52:25.860: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-8171 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Mar 28 12:52:35.860: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-8171 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 28 12:52:36.079: INFO: rc: 1
Mar 28 12:52:36.079: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-8171 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 280 lines: the identical RunHostCmd retry repeats every 10s from 12:52:46 to 12:57:22, each attempt failing with rc: 1 and 'Error from server (NotFound): pods "ss-2" not found' ...
Mar 28 12:57:32.638: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-8171 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 28 12:57:32.856: INFO: rc: 1
Mar 28 12:57:32.856: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Mar 28 12:57:32.856: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":283,"completed":195,"skipped":3143,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSS
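The five minutes of rc: 1 lines above are the e2e framework re-running the same kubectl exec every 10 seconds until the pod comes back. A minimal Go sketch of that fixed-interval retry, shelling out to kubectl the way the log does; this is an illustration, not the framework's actual RunHostCmd helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryHostCmd re-runs a shell command in a pod via kubectl exec at a fixed
// interval until it exits 0 or the deadline passes.
func retryHostCmd(ns, pod, cmd string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "exec", "--namespace="+ns, pod,
			"--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			return string(out), fmt.Errorf("timed out running %q on %s: %w", cmd, pod, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	out, err := retryHostCmd("statefulset-8171", "ss-2",
		"mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true",
		10*time.Second, 5*time.Minute)
	fmt.Println(out, err)
}

Note that the `|| true` suffix makes the shell itself always exit 0; the nonzero rc in the log comes from kubectl failing to reach the container at all ("container not found", then "pods \"ss-2\" not found") while the pod was being rescheduled.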
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 28 12:57:33.612: INFO: stderr: ""
Mar 28 12:57:33.612: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://34.102.168.175:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://34.102.168.175:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 12:57:33.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4148" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":283,"completed":196,"skipped":3157,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 8 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 28 12:57:36.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8939" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":283,"completed":197,"skipped":3227,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-3048/secret-test-c341d2f5-4d46-41be-b95c-7e2fefa063a5
STEP: Creating a pod to test consume secrets
Mar 28 12:57:36.307: INFO: Waiting up to 5m0s for pod "pod-configmaps-690d9d40-7189-4a40-94e4-365c3b98c507" in namespace "secrets-3048" to be "Succeeded or Failed"
Mar 28 12:57:36.342: INFO: Pod "pod-configmaps-690d9d40-7189-4a40-94e4-365c3b98c507": Phase="Pending", Reason="", readiness=false. Elapsed: 34.129996ms
Mar 28 12:57:38.372: INFO: Pod "pod-configmaps-690d9d40-7189-4a40-94e4-365c3b98c507": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064203862s
STEP: Saw pod success
Mar 28 12:57:38.372: INFO: Pod "pod-configmaps-690d9d40-7189-4a40-94e4-365c3b98c507" satisfied condition "Succeeded or Failed"
Mar 28 12:57:38.402: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-configmaps-690d9d40-7189-4a40-94e4-365c3b98c507 container env-test: <nil>
STEP: delete the pod
Mar 28 12:57:38.495: INFO: Waiting for pod pod-configmaps-690d9d40-7189-4a40-94e4-365c3b98c507 to disappear
Mar 28 12:57:38.526: INFO: Pod pod-configmaps-690d9d40-7189-4a40-94e4-365c3b98c507 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 28 12:57:38.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3048" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":198,"skipped":3243,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
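Each "Waiting up to 5m0s for pod ... to be \"Succeeded or Failed\"" line followed by Pending/Succeeded "Elapsed" lines is a phase poll against the API server. A sketch of the same loop with client-go, assuming client-go >= 0.18 and an already-configured clientset (not the framework's exact helper):

package main

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodTerminal polls the pod's phase every 2s for up to 5m until it is
// Succeeded or Failed, which is what produces the "Elapsed" lines above.
func waitPodTerminal(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed, nil
	})
}

func main() {}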
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 28 12:57:38.622: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-8d1d9e3a-b848-4667-af44-b018f9787f16
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 28 12:57:38.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7619" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":283,"completed":199,"skipped":3248,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
Mar 28 12:57:52.629: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 28 12:57:55.948: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 12:58:08.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8347" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":283,"completed":200,"skipped":3251,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Mar 28 12:58:59.488: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1267 /api/v1/namespaces/watch-1267/configmaps/e2e-watch-test-configmap-b 5755d7ba-7cb8-406e-865a-55d769d1661b 20095 0 2020-03-28 12:58:49 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 28 12:58:59.489: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1267 /api/v1/namespaces/watch-1267/configmaps/e2e-watch-test-configmap-b 5755d7ba-7cb8-406e-865a-55d769d1661b 20095 0 2020-03-28 12:58:49 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 28 12:59:09.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1267" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":283,"completed":201,"skipped":3274,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
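The "Got : DELETED &ConfigMap{...}" lines come from a watch on configmaps filtered by the test's label. Roughly, with client-go (the label value is copied from the log; the rest is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMaps prints every event for configmaps carrying the label the
// test selects on, until the watch channel closes.
func watchConfigMaps(c kubernetes.Interface, ns string) error {
	w, err := c.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-B",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}

func main() {}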
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] HostPath
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test hostPath mode
Mar 28 12:59:09.751: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5909" to be "Succeeded or Failed"
Mar 28 12:59:09.782: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 30.47963ms
Mar 28 12:59:11.812: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060792419s
STEP: Saw pod success
Mar 28 12:59:11.812: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Mar 28 12:59:11.843: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Mar 28 12:59:11.929: INFO: Waiting for pod pod-host-path-test to disappear
Mar 28 12:59:11.959: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:175
Mar 28 12:59:11.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5909" for this suite.
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":202,"skipped":3291,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 28 12:59:13.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0328 12:59:13.038879   24783 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-460" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":283,"completed":203,"skipped":3357,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
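PropagationPolicy=Orphan in the garbage-collector test above means the Deployment object is deleted but its ReplicaSets keep running with their owner references cleared, so the GC never cascades to them. With client-go (>= 0.18 signatures), that is approximately:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteOrphaning removes the Deployment object itself while leaving its
// ReplicaSets (and their pods) behind.
func deleteOrphaning(c kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return c.AppsV1().Deployments(ns).Delete(context.TODO(), name, metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}

func main() {}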
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 46 lines ...
Mar 28 12:59:31.163: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8002/pods","resourceVersion":"20335"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 28 12:59:31.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8002" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":283,"completed":204,"skipped":3392,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
Mar 28 13:01:10.977: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:175
Mar 28 13:01:11.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-205" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":283,"completed":205,"skipped":3392,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
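"minTolerationSeconds" in the eviction test above maps to the TolerationSeconds field: how long a pod tolerating a NoExecute taint may stay on the tainted node before the taint manager evicts it. The taint key, value, and effect below are the ones the log shows; the seconds value is illustrative:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// After `seconds` seconds on a node tainted
	// kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute,
	// the pod is evicted even though it tolerates the taint.
	seconds := int64(15) // illustrative value
	tol := v1.Toleration{
		Key:               "kubernetes.io/e2e-evict-taint-key",
		Operator:          v1.TolerationOpEqual,
		Value:             "evictTaintVal",
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &seconds,
	}
	fmt.Printf("%+v\n", tol)
}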
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Mar 28 13:01:19.371: INFO: stderr: ""
Mar 28 13:01:19.371: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 13:01:19.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6264" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":283,"completed":206,"skipped":3425,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 28 13:01:22.335: INFO: Successfully updated pod "labelsupdateb5f285f1-8f65-475a-9145-234a1767b7f4"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 28 13:01:26.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6503" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":283,"completed":207,"skipped":3458,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 13:01:26.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f78fe6f-31ff-4d71-8b45-39043e297bf0" in namespace "projected-9585" to be "Succeeded or Failed"
Mar 28 13:01:26.738: INFO: Pod "downwardapi-volume-4f78fe6f-31ff-4d71-8b45-39043e297bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 33.744431ms
Mar 28 13:01:28.769: INFO: Pod "downwardapi-volume-4f78fe6f-31ff-4d71-8b45-39043e297bf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064817979s
STEP: Saw pod success
Mar 28 13:01:28.769: INFO: Pod "downwardapi-volume-4f78fe6f-31ff-4d71-8b45-39043e297bf0" satisfied condition "Succeeded or Failed"
Mar 28 13:01:28.799: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-4f78fe6f-31ff-4d71-8b45-39043e297bf0 container client-container: <nil>
STEP: delete the pod
Mar 28 13:01:28.896: INFO: Waiting for pod downwardapi-volume-4f78fe6f-31ff-4d71-8b45-39043e297bf0 to disappear
Mar 28 13:01:28.927: INFO: Pod downwardapi-volume-4f78fe6f-31ff-4d71-8b45-39043e297bf0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 28 13:01:28.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9585" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":283,"completed":208,"skipped":3460,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
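The downward API volume in this test exposes a container resource value as a file via resourceFieldRef. A sketch of the volume file the test plausibly builds (the container name matches the log; the path and divisor are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Exposes the container's CPU request as a file inside a downward API
	// (or projected downwardAPI) volume, scaled by the divisor.
	file := v1.DownwardAPIVolumeFile{
		Path: "cpu_request", // illustrative file name
		ResourceFieldRef: &v1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "requests.cpu",
			Divisor:       resource.MustParse("1m"),
		},
	}
	fmt.Printf("%+v\n", file)
}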
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 28 13:01:29.023: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 28 13:03:29.282: INFO: Deleting pod "var-expansion-5607879e-9447-46dd-8b8c-0fb3941b2606" in namespace "var-expansion-4428"
Mar 28 13:03:29.317: INFO: Wait up to 5m0s for pod "var-expansion-5607879e-9447-46dd-8b8c-0fb3941b2606" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 28 13:03:33.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4428" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":283,"completed":209,"skipped":3488,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 13:03:33.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dff09ee0-7368-4fc5-9d4f-4ea2a3bd8554" in namespace "projected-1184" to be "Succeeded or Failed"
Mar 28 13:03:33.666: INFO: Pod "downwardapi-volume-dff09ee0-7368-4fc5-9d4f-4ea2a3bd8554": Phase="Pending", Reason="", readiness=false. Elapsed: 30.145304ms
Mar 28 13:03:35.697: INFO: Pod "downwardapi-volume-dff09ee0-7368-4fc5-9d4f-4ea2a3bd8554": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060996554s
STEP: Saw pod success
Mar 28 13:03:35.697: INFO: Pod "downwardapi-volume-dff09ee0-7368-4fc5-9d4f-4ea2a3bd8554" satisfied condition "Succeeded or Failed"
Mar 28 13:03:35.727: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-dff09ee0-7368-4fc5-9d4f-4ea2a3bd8554 container client-container: <nil>
STEP: delete the pod
Mar 28 13:03:35.818: INFO: Waiting for pod downwardapi-volume-dff09ee0-7368-4fc5-9d4f-4ea2a3bd8554 to disappear
Mar 28 13:03:35.849: INFO: Pod downwardapi-volume-dff09ee0-7368-4fc5-9d4f-4ea2a3bd8554 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 28 13:03:35.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1184" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":210,"skipped":3502,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Mar 28 13:03:38.472: INFO: Pod "test-recreate-deployment-5f94c574ff-ldpwb" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-ldpwb test-recreate-deployment-5f94c574ff- deployment-3979 /api/v1/namespaces/deployment-3979/pods/test-recreate-deployment-5f94c574ff-ldpwb b21f0408-abe1-461e-b57f-456c7c54d360 21097 0 2020-03-28 13:03:38 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff a83f4a73-dd22-444d-a263-048445f57a96 0xc000815f57 0xc000815f58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlgjr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlgjr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlgjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 13:03:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 13:03:38 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 13:03:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 13:03:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-28 13:03:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 28 13:03:38.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3979" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":283,"completed":211,"skipped":3542,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
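The "is not available" dump followed by a Pending pod is expected under the Recreate strategy: all old pods are killed before any new pod is created, so there is a window with zero available replicas. The strategy itself is a single field on the Deployment spec:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// Recreate (as opposed to the default RollingUpdate) scales the old
	// ReplicaSet to zero before the new one comes up.
	strategy := appsv1.DeploymentStrategy{
		Type: appsv1.RecreateDeploymentStrategyType,
	}
	fmt.Printf("%+v\n", strategy)
}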
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 13:03:38.731: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4abec44-95f5-4b7c-b2c7-1243e3365e08" in namespace "projected-6680" to be "Succeeded or Failed"
Mar 28 13:03:38.762: INFO: Pod "downwardapi-volume-f4abec44-95f5-4b7c-b2c7-1243e3365e08": Phase="Pending", Reason="", readiness=false. Elapsed: 31.01634ms
Mar 28 13:03:40.793: INFO: Pod "downwardapi-volume-f4abec44-95f5-4b7c-b2c7-1243e3365e08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061716664s
STEP: Saw pod success
Mar 28 13:03:40.793: INFO: Pod "downwardapi-volume-f4abec44-95f5-4b7c-b2c7-1243e3365e08" satisfied condition "Succeeded or Failed"
Mar 28 13:03:40.824: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-f4abec44-95f5-4b7c-b2c7-1243e3365e08 container client-container: <nil>
STEP: delete the pod
Mar 28 13:03:40.917: INFO: Waiting for pod downwardapi-volume-f4abec44-95f5-4b7c-b2c7-1243e3365e08 to disappear
Mar 28 13:03:40.949: INFO: Pod downwardapi-volume-f4abec44-95f5-4b7c-b2c7-1243e3365e08 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 28 13:03:40.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6680" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":212,"skipped":3576,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 28 13:03:41.043: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Mar 28 13:03:41.168: INFO: PodSpec: initContainers in spec.initContainers
Mar 28 13:04:26.134: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8a6b7465-abf8-4a18-8cc2-9afd98e98c13", GenerateName:"", Namespace:"init-container-6234", SelfLink:"/api/v1/namespaces/init-container-6234/pods/pod-init-8a6b7465-abf8-4a18-8cc2-9afd98e98c13", UID:"a8d44971-eeb4-4995-8ee3-d4aed490b44c", ResourceVersion:"21302", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720997421, loc:(*time.Location)(0x7b56f20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"168879249"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"192.168.154.149/32"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v4nv7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00116dc40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4nv7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4nv7", 
ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4nv7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000814428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004b6230), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0008144f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000814510)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000814518), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00081451c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720997421, loc:(*time.Location)(0x7b56f20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720997421, loc:(*time.Location)(0x7b56f20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720997421, loc:(*time.Location)(0x7b56f20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720997421, loc:(*time.Location)(0x7b56f20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.150.0.4", PodIP:"192.168.154.149", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.154.149"}}, StartTime:(*v1.Time)(0xc002f72160), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0004b6310)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0004b6380)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d325500a86df5fb11e665f56157d8e170983ebd262abae0af4dcc02e7fd20545", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002f72220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002f72180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00081473f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 28 13:04:26.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6234" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":283,"completed":213,"skipped":3600,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 28 13:04:26.353: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 28 13:04:29.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-576" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":283,"completed":214,"skipped":3610,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 28 13:04:36.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6970" for this suite.
STEP: Destroying namespace "webhook-6970-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":283,"completed":215,"skipped":3630,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Mar 28 13:04:37.029: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-resource-version 11f9f9f0-1859-4f5a-9e20-91efc0ddafc9 21443 0 2020-03-28 13:04:36 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 28 13:04:37.029: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-resource-version 11f9f9f0-1859-4f5a-9e20-91efc0ddafc9 21446 0 2020-03-28 13:04:36 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 28 13:04:37.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1978" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":283,"completed":216,"skipped":3661,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 23 lines ...
Mar 28 13:04:51.619: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 28 13:04:51.650: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 28 13:04:51.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9466" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":283,"completed":217,"skipped":3669,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 28 13:04:56.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-102" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":283,"completed":218,"skipped":3675,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 26 lines ...
Mar 28 13:05:15.744: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 28 13:05:15.999: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 28 13:05:15.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1718" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":283,"completed":219,"skipped":3714,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-1eb070f0-161b-457b-a30d-ed1133048ba4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 28 13:05:22.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7978" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":220,"skipped":3717,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-0d12652d-e0ac-4d99-9d49-8a470c468c0f
STEP: Creating a pod to test consume secrets
Mar 28 13:05:23.147: INFO: Waiting up to 5m0s for pod "pod-secrets-025018d8-a1b8-49e6-80da-d06fe7fdf9fe" in namespace "secrets-7204" to be "Succeeded or Failed"
Mar 28 13:05:23.178: INFO: Pod "pod-secrets-025018d8-a1b8-49e6-80da-d06fe7fdf9fe": Phase="Pending", Reason="", readiness=false. Elapsed: 30.893649ms
Mar 28 13:05:25.209: INFO: Pod "pod-secrets-025018d8-a1b8-49e6-80da-d06fe7fdf9fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062062537s
STEP: Saw pod success
Mar 28 13:05:25.210: INFO: Pod "pod-secrets-025018d8-a1b8-49e6-80da-d06fe7fdf9fe" satisfied condition "Succeeded or Failed"
Mar 28 13:05:25.240: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-secrets-025018d8-a1b8-49e6-80da-d06fe7fdf9fe container secret-volume-test: <nil>
STEP: delete the pod
Mar 28 13:05:25.320: INFO: Waiting for pod pod-secrets-025018d8-a1b8-49e6-80da-d06fe7fdf9fe to disappear
Mar 28 13:05:25.352: INFO: Pod pod-secrets-025018d8-a1b8-49e6-80da-d06fe7fdf9fe no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 28 13:05:25.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7204" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":221,"skipped":3741,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 13:05:25.443: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar 28 13:05:25.611: INFO: Waiting up to 5m0s for pod "pod-efb9bf69-b402-41a3-a9c4-f68710f43811" in namespace "emptydir-4856" to be "Succeeded or Failed"
Mar 28 13:05:25.643: INFO: Pod "pod-efb9bf69-b402-41a3-a9c4-f68710f43811": Phase="Pending", Reason="", readiness=false. Elapsed: 32.54742ms
Mar 28 13:05:27.674: INFO: Pod "pod-efb9bf69-b402-41a3-a9c4-f68710f43811": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062976011s
STEP: Saw pod success
Mar 28 13:05:27.674: INFO: Pod "pod-efb9bf69-b402-41a3-a9c4-f68710f43811" satisfied condition "Succeeded or Failed"
Mar 28 13:05:27.706: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-efb9bf69-b402-41a3-a9c4-f68710f43811 container test-container: <nil>
STEP: delete the pod
Mar 28 13:05:27.785: INFO: Waiting for pod pod-efb9bf69-b402-41a3-a9c4-f68710f43811 to disappear
Mar 28 13:05:27.815: INFO: Pod pod-efb9bf69-b402-41a3-a9c4-f68710f43811 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 13:05:27.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4856" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":222,"skipped":3757,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Mar 28 13:05:36.392: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 28 13:05:36.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-335" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":283,"completed":223,"skipped":3781,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 13:05:36.685: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b6ca041-4222-433d-a2ee-6e26f7857137" in namespace "projected-1034" to be "Succeeded or Failed"
Mar 28 13:05:36.716: INFO: Pod "downwardapi-volume-6b6ca041-4222-433d-a2ee-6e26f7857137": Phase="Pending", Reason="", readiness=false. Elapsed: 31.304772ms
Mar 28 13:05:38.747: INFO: Pod "downwardapi-volume-6b6ca041-4222-433d-a2ee-6e26f7857137": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062075412s
STEP: Saw pod success
Mar 28 13:05:38.747: INFO: Pod "downwardapi-volume-6b6ca041-4222-433d-a2ee-6e26f7857137" satisfied condition "Succeeded or Failed"
Mar 28 13:05:38.778: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-6b6ca041-4222-433d-a2ee-6e26f7857137 container client-container: <nil>
STEP: delete the pod
Mar 28 13:05:38.858: INFO: Waiting for pod downwardapi-volume-6b6ca041-4222-433d-a2ee-6e26f7857137 to disappear
Mar 28 13:05:38.890: INFO: Pod downwardapi-volume-6b6ca041-4222-433d-a2ee-6e26f7857137 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 28 13:05:38.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1034" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":224,"skipped":3792,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Mar 28 13:05:43.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1843" for this suite.
STEP: Destroying namespace "webhook-1843-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":283,"completed":225,"skipped":3803,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Mar 28 13:05:46.971: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:47.003: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:47.036: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:47.067: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:47.100: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:47.133: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:47.326: INFO: Lookups using dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc]

Mar 28 13:05:52.360: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:52.393: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:52.426: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:52.460: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:52.492: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
... skipping 5 lines ...
Mar 28 13:05:52.883: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:52.914: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:52.946: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:52.978: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:53.011: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:53.043: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:53.240: INFO: Lookups using dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc]

Mar 28 13:05:57.359: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:57.397: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:57.430: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:57.462: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:57.495: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
... skipping 5 lines ...
Mar 28 13:05:57.885: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:57.917: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:57.948: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:57.980: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:58.011: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:58.042: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:05:58.239: INFO: Lookups using dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc]

Mar 28 13:06:02.357: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:02.389: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:02.422: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:02.455: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:02.486: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
... skipping 5 lines ...
Mar 28 13:06:02.877: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:02.910: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:02.941: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:02.972: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:03.006: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:03.038: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:03.230: INFO: Lookups using dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc]

Mar 28 13:06:07.357: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:07.389: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:07.421: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:07.454: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:07.486: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
... skipping 5 lines ...
Mar 28 13:06:07.989: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:08.020: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:08.053: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:08.086: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:08.117: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:08.150: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:08.344: INFO: Lookups using dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc]

Mar 28 13:06:12.357: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:12.388: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:12.420: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:12.452: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:12.484: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
... skipping 5 lines ...
Mar 28 13:06:12.871: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:12.903: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:12.937: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:12.968: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:13.001: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:13.043: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e: the server could not find the requested resource (get pods dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e)
Mar 28 13:06:13.241: INFO: Lookups using dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc]

Mar 28 13:06:18.239: INFO: DNS probes using dns-9910/dns-test-ad0538a4-e7a7-4c71-8401-459b9b0da71e succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 28 13:06:18.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9910" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":283,"completed":226,"skipped":3809,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Mar 28 13:06:24.853: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
Mar 28 13:06:24.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-210" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":283,"completed":227,"skipped":3814,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-2afb16c9-0bf7-4c01-8b33-3f34a86a76a6
STEP: Creating a pod to test consume configMaps
Mar 28 13:06:25.184: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3e24a1eb-58d9-48a4-a5c1-08885391ee41" in namespace "projected-5138" to be "Succeeded or Failed"
Mar 28 13:06:25.216: INFO: Pod "pod-projected-configmaps-3e24a1eb-58d9-48a4-a5c1-08885391ee41": Phase="Pending", Reason="", readiness=false. Elapsed: 32.253289ms
Mar 28 13:06:27.247: INFO: Pod "pod-projected-configmaps-3e24a1eb-58d9-48a4-a5c1-08885391ee41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06326185s
STEP: Saw pod success
Mar 28 13:06:27.247: INFO: Pod "pod-projected-configmaps-3e24a1eb-58d9-48a4-a5c1-08885391ee41" satisfied condition "Succeeded or Failed"
Mar 28 13:06:27.278: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-projected-configmaps-3e24a1eb-58d9-48a4-a5c1-08885391ee41 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 13:06:27.357: INFO: Waiting for pod pod-projected-configmaps-3e24a1eb-58d9-48a4-a5c1-08885391ee41 to disappear
Mar 28 13:06:27.389: INFO: Pod pod-projected-configmaps-3e24a1eb-58d9-48a4-a5c1-08885391ee41 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 28 13:06:27.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5138" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":228,"skipped":3817,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 13:06:27.482: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 28 13:06:27.648: INFO: Waiting up to 5m0s for pod "pod-20b16a09-e425-4736-ba33-3047f0fd63da" in namespace "emptydir-7069" to be "Succeeded or Failed"
Mar 28 13:06:27.687: INFO: Pod "pod-20b16a09-e425-4736-ba33-3047f0fd63da": Phase="Pending", Reason="", readiness=false. Elapsed: 38.426702ms
Mar 28 13:06:29.718: INFO: Pod "pod-20b16a09-e425-4736-ba33-3047f0fd63da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.069747057s
STEP: Saw pod success
Mar 28 13:06:29.718: INFO: Pod "pod-20b16a09-e425-4736-ba33-3047f0fd63da" satisfied condition "Succeeded or Failed"
Mar 28 13:06:29.748: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-20b16a09-e425-4736-ba33-3047f0fd63da container test-container: <nil>
STEP: delete the pod
Mar 28 13:06:29.827: INFO: Waiting for pod pod-20b16a09-e425-4736-ba33-3047f0fd63da to disappear
Mar 28 13:06:29.857: INFO: Pod pod-20b16a09-e425-4736-ba33-3047f0fd63da no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 13:06:29.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7069" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":229,"skipped":3851,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 28 13:06:29.951: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 28 13:06:30.315: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 28 13:06:30.315: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 28 13:06:30.315: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
... skipping 11 lines ...
Mar 28 13:06:32.434: INFO: Node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal is running more than one daemon pod
Mar 28 13:06:33.402: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 28 13:06:33.402: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 28 13:06:33.402: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 28 13:06:33.434: INFO: Number of nodes with available pods: 2
Mar 28 13:06:33.434: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 28 13:06:33.566: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 28 13:06:33.566: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 28 13:06:33.566: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 28 13:06:33.596: INFO: Number of nodes with available pods: 1
Mar 28 13:06:33.596: INFO: Node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal is running more than one daemon pod
Mar 28 13:06:34.654: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 28 13:06:34.654: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 28 13:06:34.654: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-boskos-gce-project-02.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 28 13:06:34.686: INFO: Number of nodes with available pods: 2
Mar 28 13:06:34.686: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3917, will wait for the garbage collector to delete the pods
Mar 28 13:06:34.866: INFO: Deleting DaemonSet.extensions daemon-set took: 36.185065ms
Mar 28 13:06:35.167: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.288634ms
... skipping 4 lines ...
Mar 28 13:06:41.162: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3917/pods","resourceVersion":"22416"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 28 13:06:41.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3917" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":283,"completed":230,"skipped":3871,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 28 13:06:45.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7569" for this suite.
STEP: Destroying namespace "webhook-7569-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":283,"completed":231,"skipped":3871,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Mar 28 13:06:50.680: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-4tx2v" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-4tx2v test-rolling-update-deployment-664dd8fc7f- deployment-45 /api/v1/namespaces/deployment-45/pods/test-rolling-update-deployment-664dd8fc7f-4tx2v 0e6c4436-38fc-4edc-9d56-8241e2fe7b60 22556 0 2020-03-28 13:06:48 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:664dd8fc7f] map[cni.projectcalico.org/podIP:192.168.154.167/32] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 1a3a588f-372f-4ea1-8451-f13ae013b858 0xc0032b5df7 0xc0032b5df8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pq6z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pq6z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pq6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 13:06:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 13:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 13:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-28 13:06:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.154.167,StartTime:2020-03-28 13:06:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-28 13:06:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://9ca070ffe1dfeb9a089eb22200261bb39a4848de80ef7d0fab2b71dc809ce04e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.154.167,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 28 13:06:50.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-45" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":283,"completed":232,"skipped":3886,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 13:06:50.939: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b37a661f-4746-400d-8989-340483104aaa" in namespace "downward-api-5101" to be "Succeeded or Failed"
Mar 28 13:06:50.972: INFO: Pod "downwardapi-volume-b37a661f-4746-400d-8989-340483104aaa": Phase="Pending", Reason="", readiness=false. Elapsed: 32.775046ms
Mar 28 13:06:53.005: INFO: Pod "downwardapi-volume-b37a661f-4746-400d-8989-340483104aaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065707306s
STEP: Saw pod success
Mar 28 13:06:53.005: INFO: Pod "downwardapi-volume-b37a661f-4746-400d-8989-340483104aaa" satisfied condition "Succeeded or Failed"
Mar 28 13:06:53.035: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-b37a661f-4746-400d-8989-340483104aaa container client-container: <nil>
STEP: delete the pod
Mar 28 13:06:53.115: INFO: Waiting for pod downwardapi-volume-b37a661f-4746-400d-8989-340483104aaa to disappear
Mar 28 13:06:53.146: INFO: Pod downwardapi-volume-b37a661f-4746-400d-8989-340483104aaa no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 28 13:06:53.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5101" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":233,"skipped":3941,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 28 13:07:04.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4945" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":283,"completed":234,"skipped":3959,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 28 13:07:04.696: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Mar 28 13:07:04.861: INFO: Waiting up to 5m0s for pod "var-expansion-f78de0a5-346b-402a-91c1-9d1e6cc67ffa" in namespace "var-expansion-5864" to be "Succeeded or Failed"
Mar 28 13:07:04.892: INFO: Pod "var-expansion-f78de0a5-346b-402a-91c1-9d1e6cc67ffa": Phase="Pending", Reason="", readiness=false. Elapsed: 30.854125ms
Mar 28 13:07:06.923: INFO: Pod "var-expansion-f78de0a5-346b-402a-91c1-9d1e6cc67ffa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062361737s
STEP: Saw pod success
Mar 28 13:07:06.923: INFO: Pod "var-expansion-f78de0a5-346b-402a-91c1-9d1e6cc67ffa" satisfied condition "Succeeded or Failed"
Mar 28 13:07:06.953: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod var-expansion-f78de0a5-346b-402a-91c1-9d1e6cc67ffa container dapi-container: <nil>
STEP: delete the pod
Mar 28 13:07:07.030: INFO: Waiting for pod var-expansion-f78de0a5-346b-402a-91c1-9d1e6cc67ffa to disappear
Mar 28 13:07:07.061: INFO: Pod var-expansion-f78de0a5-346b-402a-91c1-9d1e6cc67ffa no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 28 13:07:07.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5864" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":283,"completed":235,"skipped":3991,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-3052/configmap-test-15b7e247-0682-40ad-9b51-ec7d5f0b7013
STEP: Creating a pod to test consume configMaps
Mar 28 13:07:07.350: INFO: Waiting up to 5m0s for pod "pod-configmaps-7c91f1e0-c982-478c-a430-f5390eaa873c" in namespace "configmap-3052" to be "Succeeded or Failed"
Mar 28 13:07:07.381: INFO: Pod "pod-configmaps-7c91f1e0-c982-478c-a430-f5390eaa873c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.529853ms
Mar 28 13:07:09.412: INFO: Pod "pod-configmaps-7c91f1e0-c982-478c-a430-f5390eaa873c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06177682s
STEP: Saw pod success
Mar 28 13:07:09.412: INFO: Pod "pod-configmaps-7c91f1e0-c982-478c-a430-f5390eaa873c" satisfied condition "Succeeded or Failed"
Mar 28 13:07:09.443: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-configmaps-7c91f1e0-c982-478c-a430-f5390eaa873c container env-test: <nil>
STEP: delete the pod
Mar 28 13:07:09.520: INFO: Waiting for pod pod-configmaps-7c91f1e0-c982-478c-a430-f5390eaa873c to disappear
Mar 28 13:07:09.552: INFO: Pod pod-configmaps-7c91f1e0-c982-478c-a430-f5390eaa873c no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 13:07:09.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3052" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":236,"skipped":4003,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 28 13:07:11.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5225" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":237,"skipped":4021,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 28 13:07:14.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6288" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":283,"completed":238,"skipped":4051,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-e5a87d04-8203-4a30-902c-3b344712abeb
STEP: Creating a pod to test consume configMaps
Mar 28 13:07:14.928: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5380144b-c2c1-4e29-854a-52a20779923b" in namespace "projected-3699" to be "Succeeded or Failed"
Mar 28 13:07:14.962: INFO: Pod "pod-projected-configmaps-5380144b-c2c1-4e29-854a-52a20779923b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.547542ms
Mar 28 13:07:16.993: INFO: Pod "pod-projected-configmaps-5380144b-c2c1-4e29-854a-52a20779923b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064098683s
STEP: Saw pod success
Mar 28 13:07:16.993: INFO: Pod "pod-projected-configmaps-5380144b-c2c1-4e29-854a-52a20779923b" satisfied condition "Succeeded or Failed"
Mar 28 13:07:17.023: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-projected-configmaps-5380144b-c2c1-4e29-854a-52a20779923b container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 13:07:17.103: INFO: Waiting for pod pod-projected-configmaps-5380144b-c2c1-4e29-854a-52a20779923b to disappear
Mar 28 13:07:17.134: INFO: Pod pod-projected-configmaps-5380144b-c2c1-4e29-854a-52a20779923b no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 28 13:07:17.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3699" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":239,"skipped":4051,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 28 13:07:17.226: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 5 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 28 13:07:18.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720997638, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720997638, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720997638, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720997638, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 28 13:07:21.527: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 13:07:21.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3929" for this suite.
STEP: Destroying namespace "webhook-3929-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":283,"completed":240,"skipped":4052,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
Mar 28 13:09:37.764: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:175
Mar 28 13:09:37.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-998" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":283,"completed":241,"skipped":4068,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Mar 28 13:10:01.837: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 28 13:10:03.081: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 28 13:10:03.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4443" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":242,"skipped":4082,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 28 13:10:21.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9879" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":283,"completed":243,"skipped":4110,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-d2b73f8b-effe-4d15-b2c3-19ee77045be8
STEP: Creating a pod to test consume configMaps
Mar 28 13:10:21.459: INFO: Waiting up to 5m0s for pod "pod-configmaps-50f43302-d05d-4c00-808f-cbbbcd62c019" in namespace "configmap-7706" to be "Succeeded or Failed"
Mar 28 13:10:21.489: INFO: Pod "pod-configmaps-50f43302-d05d-4c00-808f-cbbbcd62c019": Phase="Pending", Reason="", readiness=false. Elapsed: 29.671404ms
Mar 28 13:10:23.520: INFO: Pod "pod-configmaps-50f43302-d05d-4c00-808f-cbbbcd62c019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060098826s
STEP: Saw pod success
Mar 28 13:10:23.520: INFO: Pod "pod-configmaps-50f43302-d05d-4c00-808f-cbbbcd62c019" satisfied condition "Succeeded or Failed"
Mar 28 13:10:23.550: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-configmaps-50f43302-d05d-4c00-808f-cbbbcd62c019 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 13:10:23.645: INFO: Waiting for pod pod-configmaps-50f43302-d05d-4c00-808f-cbbbcd62c019 to disappear
Mar 28 13:10:23.676: INFO: Pod pod-configmaps-50f43302-d05d-4c00-808f-cbbbcd62c019 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 13:10:23.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7706" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":244,"skipped":4119,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 40 lines ...
• [SLOW TEST:305.004 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":283,"completed":245,"skipped":4149,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 13:15:28.947: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35318ba0-d825-499a-a548-8af355b85637" in namespace "downward-api-9448" to be "Succeeded or Failed"
Mar 28 13:15:28.980: INFO: Pod "downwardapi-volume-35318ba0-d825-499a-a548-8af355b85637": Phase="Pending", Reason="", readiness=false. Elapsed: 32.726892ms
Mar 28 13:15:31.012: INFO: Pod "downwardapi-volume-35318ba0-d825-499a-a548-8af355b85637": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06493443s
Mar 28 13:15:33.043: INFO: Pod "downwardapi-volume-35318ba0-d825-499a-a548-8af355b85637": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095957911s
STEP: Saw pod success
Mar 28 13:15:33.043: INFO: Pod "downwardapi-volume-35318ba0-d825-499a-a548-8af355b85637" satisfied condition "Succeeded or Failed"
Mar 28 13:15:33.074: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-35318ba0-d825-499a-a548-8af355b85637 container client-container: <nil>
STEP: delete the pod
Mar 28 13:15:33.165: INFO: Waiting for pod downwardapi-volume-35318ba0-d825-499a-a548-8af355b85637 to disappear
Mar 28 13:15:33.196: INFO: Pod downwardapi-volume-35318ba0-d825-499a-a548-8af355b85637 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 28 13:15:33.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9448" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":283,"completed":246,"skipped":4183,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 28 13:15:33.288: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 28 13:15:33.453: INFO: Waiting up to 5m0s for pod "downward-api-f81c45be-fd97-4c6e-b273-d388547402c1" in namespace "downward-api-5863" to be "Succeeded or Failed"
Mar 28 13:15:33.483: INFO: Pod "downward-api-f81c45be-fd97-4c6e-b273-d388547402c1": Phase="Pending", Reason="", readiness=false. Elapsed: 29.988812ms
Mar 28 13:15:35.514: INFO: Pod "downward-api-f81c45be-fd97-4c6e-b273-d388547402c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0611326s
STEP: Saw pod success
Mar 28 13:15:35.514: INFO: Pod "downward-api-f81c45be-fd97-4c6e-b273-d388547402c1" satisfied condition "Succeeded or Failed"
Mar 28 13:15:35.544: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod downward-api-f81c45be-fd97-4c6e-b273-d388547402c1 container dapi-container: <nil>
STEP: delete the pod
Mar 28 13:15:35.621: INFO: Waiting for pod downward-api-f81c45be-fd97-4c6e-b273-d388547402c1 to disappear
Mar 28 13:15:35.651: INFO: Pod downward-api-f81c45be-fd97-4c6e-b273-d388547402c1 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 28 13:15:35.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5863" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":283,"completed":247,"skipped":4183,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 62 lines ...
Mar 28 13:15:51.134: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3108/pods","resourceVersion":"24396"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 28 13:15:51.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3108" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":283,"completed":248,"skipped":4244,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-6abce4bb-7f4a-41bf-8cfe-075cb6f86d21
STEP: Creating a pod to test consume secrets
Mar 28 13:15:51.526: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bc447b28-1f3e-442f-8ed6-a27bce3d1fc1" in namespace "projected-2184" to be "Succeeded or Failed"
Mar 28 13:15:51.557: INFO: Pod "pod-projected-secrets-bc447b28-1f3e-442f-8ed6-a27bce3d1fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 31.900617ms
Mar 28 13:15:53.588: INFO: Pod "pod-projected-secrets-bc447b28-1f3e-442f-8ed6-a27bce3d1fc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062606192s
STEP: Saw pod success
Mar 28 13:15:53.588: INFO: Pod "pod-projected-secrets-bc447b28-1f3e-442f-8ed6-a27bce3d1fc1" satisfied condition "Succeeded or Failed"
Mar 28 13:15:53.619: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-projected-secrets-bc447b28-1f3e-442f-8ed6-a27bce3d1fc1 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 28 13:15:53.728: INFO: Waiting for pod pod-projected-secrets-bc447b28-1f3e-442f-8ed6-a27bce3d1fc1 to disappear
Mar 28 13:15:53.760: INFO: Pod pod-projected-secrets-bc447b28-1f3e-442f-8ed6-a27bce3d1fc1 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 28 13:15:53.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2184" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":249,"skipped":4254,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 28 13:15:59.222: INFO: stderr: ""
Mar 28 13:15:59.222: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9945-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 13:16:02.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1677" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":283,"completed":250,"skipped":4257,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 13:16:02.278: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cd3145a-8fd0-4982-86df-263ee09abd7b" in namespace "downward-api-3019" to be "Succeeded or Failed"
Mar 28 13:16:02.309: INFO: Pod "downwardapi-volume-9cd3145a-8fd0-4982-86df-263ee09abd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.911009ms
Mar 28 13:16:04.340: INFO: Pod "downwardapi-volume-9cd3145a-8fd0-4982-86df-263ee09abd7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061864304s
STEP: Saw pod success
Mar 28 13:16:04.340: INFO: Pod "downwardapi-volume-9cd3145a-8fd0-4982-86df-263ee09abd7b" satisfied condition "Succeeded or Failed"
Mar 28 13:16:04.370: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-9cd3145a-8fd0-4982-86df-263ee09abd7b container client-container: <nil>
STEP: delete the pod
Mar 28 13:16:04.449: INFO: Waiting for pod downwardapi-volume-9cd3145a-8fd0-4982-86df-263ee09abd7b to disappear
Mar 28 13:16:04.480: INFO: Pod downwardapi-volume-9cd3145a-8fd0-4982-86df-263ee09abd7b no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 28 13:16:04.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3019" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":251,"skipped":4278,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 28 13:16:04.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2043e7b-5738-4ad4-8f1a-818d7658dd89" in namespace "downward-api-9133" to be "Succeeded or Failed"
Mar 28 13:16:04.775: INFO: Pod "downwardapi-volume-d2043e7b-5738-4ad4-8f1a-818d7658dd89": Phase="Pending", Reason="", readiness=false. Elapsed: 39.280858ms
Mar 28 13:16:06.806: INFO: Pod "downwardapi-volume-d2043e7b-5738-4ad4-8f1a-818d7658dd89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.070616179s
STEP: Saw pod success
Mar 28 13:16:06.806: INFO: Pod "downwardapi-volume-d2043e7b-5738-4ad4-8f1a-818d7658dd89" satisfied condition "Succeeded or Failed"
Mar 28 13:16:06.836: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod downwardapi-volume-d2043e7b-5738-4ad4-8f1a-818d7658dd89 container client-container: <nil>
STEP: delete the pod
Mar 28 13:16:06.914: INFO: Waiting for pod downwardapi-volume-d2043e7b-5738-4ad4-8f1a-818d7658dd89 to disappear
Mar 28 13:16:06.944: INFO: Pod downwardapi-volume-d2043e7b-5738-4ad4-8f1a-818d7658dd89 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 28 13:16:06.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9133" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":283,"completed":252,"skipped":4284,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-30304493-837e-4fe8-9ccc-88f764c372a5
STEP: Creating a pod to test consume secrets
Mar 28 13:16:07.238: INFO: Waiting up to 5m0s for pod "pod-secrets-e82cb53f-5ea0-4e23-93d6-3a1862d065f9" in namespace "secrets-1134" to be "Succeeded or Failed"
Mar 28 13:16:07.269: INFO: Pod "pod-secrets-e82cb53f-5ea0-4e23-93d6-3a1862d065f9": Phase="Pending", Reason="", readiness=false. Elapsed: 31.224861ms
Mar 28 13:16:09.300: INFO: Pod "pod-secrets-e82cb53f-5ea0-4e23-93d6-3a1862d065f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062173783s
STEP: Saw pod success
Mar 28 13:16:09.300: INFO: Pod "pod-secrets-e82cb53f-5ea0-4e23-93d6-3a1862d065f9" satisfied condition "Succeeded or Failed"
Mar 28 13:16:09.331: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-secrets-e82cb53f-5ea0-4e23-93d6-3a1862d065f9 container secret-volume-test: <nil>
STEP: delete the pod
Mar 28 13:16:09.409: INFO: Waiting for pod pod-secrets-e82cb53f-5ea0-4e23-93d6-3a1862d065f9 to disappear
Mar 28 13:16:09.439: INFO: Pod pod-secrets-e82cb53f-5ea0-4e23-93d6-3a1862d065f9 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 28 13:16:09.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1134" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":253,"skipped":4285,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 28 13:16:14.980: INFO: stderr: ""
Mar 28 13:16:14.980: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4592-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 28 13:16:18.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1195" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":283,"completed":254,"skipped":4294,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-cb0a96d9-c5ec-4e01-86f5-6950a1e7b124
STEP: Creating a pod to test consume configMaps
Mar 28 13:16:18.586: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d012b43-75aa-4852-82b4-ae449c503dab" in namespace "configmap-4749" to be "Succeeded or Failed"
Mar 28 13:16:18.615: INFO: Pod "pod-configmaps-1d012b43-75aa-4852-82b4-ae449c503dab": Phase="Pending", Reason="", readiness=false. Elapsed: 29.580237ms
Mar 28 13:16:20.646: INFO: Pod "pod-configmaps-1d012b43-75aa-4852-82b4-ae449c503dab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060336273s
STEP: Saw pod success
Mar 28 13:16:20.646: INFO: Pod "pod-configmaps-1d012b43-75aa-4852-82b4-ae449c503dab" satisfied condition "Succeeded or Failed"
Mar 28 13:16:20.677: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-configmaps-1d012b43-75aa-4852-82b4-ae449c503dab container configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 13:16:20.757: INFO: Waiting for pod pod-configmaps-1d012b43-75aa-4852-82b4-ae449c503dab to disappear
Mar 28 13:16:20.787: INFO: Pod pod-configmaps-1d012b43-75aa-4852-82b4-ae449c503dab no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 13:16:20.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4749" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":255,"skipped":4321,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-04bd0a51-f522-4a96-b7ff-4fc59358d8af
STEP: Creating a pod to test consume configMaps
Mar 28 13:16:21.080: INFO: Waiting up to 5m0s for pod "pod-configmaps-7595ed27-6158-450d-818e-61d4dbc960ca" in namespace "configmap-9334" to be "Succeeded or Failed"
Mar 28 13:16:21.111: INFO: Pod "pod-configmaps-7595ed27-6158-450d-818e-61d4dbc960ca": Phase="Pending", Reason="", readiness=false. Elapsed: 31.405412ms
Mar 28 13:16:23.142: INFO: Pod "pod-configmaps-7595ed27-6158-450d-818e-61d4dbc960ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062220264s
STEP: Saw pod success
Mar 28 13:16:23.142: INFO: Pod "pod-configmaps-7595ed27-6158-450d-818e-61d4dbc960ca" satisfied condition "Succeeded or Failed"
Mar 28 13:16:23.172: INFO: Trying to get logs from node test1-md-0-r7gkm.c.k8s-boskos-gce-project-02.internal pod pod-configmaps-7595ed27-6158-450d-818e-61d4dbc960ca container configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 13:16:23.248: INFO: Waiting for pod pod-configmaps-7595ed27-6158-450d-818e-61d4dbc960ca to disappear
Mar 28 13:16:23.280: INFO: Pod pod-configmaps-7595ed27-6158-450d-818e-61d4dbc960ca no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 28 13:16:23.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9334" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":283,"completed":256,"skipped":4341,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 26 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 28 13:16:31.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7267" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":283,"completed":257,"skipped":4341,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 28 13:16:36.299: INFO: File wheezy_udp@dns-test-service-3.dns-2839.svc.cluster.local from pod  dns-2839/dns-test-74bf6a00-0705-4f38-8c31-b5d9f883574f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 28 13:16:36.332: INFO: File jessie_udp@dns-test-service-3.dns-2839.svc.cluster.local from pod  dns-2839/dns-test-74bf6a00-0705-4f38-8c31-b5d9f883574f contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 28 13:16:36.332: INFO: Lookups using dns-2839/dns-test-74bf6a00-0705-4f38-8c31-b5d9f883574f failed for: [wheezy_udp@dns-test-service-3.dns-2839.svc.cluster.local jessie_udp@dns-test-service-3.dns-2839.svc.cluster.local]

... skipping 30 lines (five more probe rounds at 13:16:41, 13:16:46, 13:16:51, 13:16:56, and 13:17:01, each failing the same two UDP lookups with 'foo.example.com.' still served instead of 'bar.example.com.') ...
Mar 28 13:17:06.398: INFO: DNS probes using dns-test-74bf6a00-0705-4f38-8c31-b5d9f883574f succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2839.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2839.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 28 13:17:08.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2839" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":283,"completed":258,"skipped":4344,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 28 13:17:09.093: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-2eedf419-61b0-4db9-bc46-b73bfda0964d" in namespace "security-context-test-7184" to be "Succeeded or Failed"
Mar 28 13:17:09.124: INFO: Pod "busybox-readonly-false-2eedf419-61b0-4db9-bc46-b73bfda0964d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.067187ms
Mar 28 13:17:11.157: INFO: Pod "busybox-readonly-false-2eedf419-61b0-4db9-bc46-b73bfda0964d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06396293s
Mar 28 13:17:11.157: INFO: Pod "busybox-readonly-false-2eedf419-61b0-4db9-bc46-b73bfda0964d" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 28 13:17:11.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7184" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":283,"completed":259,"skipped":4349,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 28 13:17:11.249: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Mar 28 13:17:11.373: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 28 13:17:13.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1027" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":283,"completed":260,"skipped":4370,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Mar 28 13:17:16.558: INFO: Successfully updated pod "annotationupdate53cf6872-02ca-4fca-8e7f-027609e1a878"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 28 13:17:18.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2080" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":261,"skipped":4380,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 28 13:17:20.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6856" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":283,"completed":262,"skipped":4380,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 28 13:17:23.346: INFO: Initial restart count of pod liveness-1e74eebf-aa4e-4259-9d45-57ff1733e05a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 28 13:21:25.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4149" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":283,"completed":263,"skipped":4395,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-3bac9535-fee4-4cc0-ae26-10d9608f6a28
STEP: Creating a pod to test consume configMaps
Mar 28 13:21:25.406: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d24e9dc5-3ecf-43bb-9549-0364c8100188" in namespace "projected-3306" to be "Succeeded or Failed"
Mar 28 13:21:25.435: INFO: Pod "pod-projected-configmaps-d24e9dc5-3ecf-43bb-9549-0364c8100188": Phase="Pending", Reason="", readiness=false. Elapsed: 29.626077ms
Mar 28 13:21:27.467: INFO: Pod "pod-projected-configmaps-d24e9dc5-3ecf-43bb-9549-0364c8100188": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060966174s
STEP: Saw pod success
Mar 28 13:21:27.467: INFO: Pod "pod-projected-configmaps-d24e9dc5-3ecf-43bb-9549-0364c8100188" satisfied condition "Succeeded or Failed"
Mar 28 13:21:27.497: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-projected-configmaps-d24e9dc5-3ecf-43bb-9549-0364c8100188 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 28 13:21:27.592: INFO: Waiting for pod pod-projected-configmaps-d24e9dc5-3ecf-43bb-9549-0364c8100188 to disappear
Mar 28 13:21:27.623: INFO: Pod pod-projected-configmaps-d24e9dc5-3ecf-43bb-9549-0364c8100188 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 28 13:21:27.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3306" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":283,"completed":264,"skipped":4421,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 417 lines ...
Mar 28 13:21:38.662: INFO: 99 %ile: 922.403575ms
Mar 28 13:21:38.662: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
Mar 28 13:21:38.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-2971" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":283,"completed":265,"skipped":4451,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 32.744144ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 28 13:21:39.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4509" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":283,"completed":266,"skipped":4474,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
Mar 28 13:22:57.870: INFO: Terminating ReplicationController wrapped-volume-race-98a34c42-3716-4c73-8fe0-6751da078ac3 pods took: 300.306015ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 28 13:23:13.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8567" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":283,"completed":267,"skipped":4492,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 28 13:23:22.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8828" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":283,"completed":268,"skipped":4498,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  test/e2e/framework/framework.go:175
Mar 28 13:23:23.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1863" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":283,"completed":269,"skipped":4524,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 28 13:23:23.281: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://34.102.168.175:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 28 13:23:23.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3226" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":283,"completed":270,"skipped":4597,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 28 13:23:23.538: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 28 13:23:23.713: INFO: Waiting up to 5m0s for pod "pod-264c3708-8ec1-474a-b09f-f2c9dea4ed36" in namespace "emptydir-4889" to be "Succeeded or Failed"
Mar 28 13:23:23.758: INFO: Pod "pod-264c3708-8ec1-474a-b09f-f2c9dea4ed36": Phase="Pending", Reason="", readiness=false. Elapsed: 45.456263ms
Mar 28 13:23:25.789: INFO: Pod "pod-264c3708-8ec1-474a-b09f-f2c9dea4ed36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.076530459s
STEP: Saw pod success
Mar 28 13:23:25.789: INFO: Pod "pod-264c3708-8ec1-474a-b09f-f2c9dea4ed36" satisfied condition "Succeeded or Failed"
Mar 28 13:23:25.819: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-264c3708-8ec1-474a-b09f-f2c9dea4ed36 container test-container: <nil>
STEP: delete the pod
Mar 28 13:23:25.899: INFO: Waiting for pod pod-264c3708-8ec1-474a-b09f-f2c9dea4ed36 to disappear
Mar 28 13:23:25.930: INFO: Pod pod-264c3708-8ec1-474a-b09f-f2c9dea4ed36 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 28 13:23:25.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4889" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":271,"skipped":4613,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-74d26313-391d-4a3b-8934-8b9a630767e3
STEP: Creating a pod to test consume secrets
Mar 28 13:23:26.223: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c7f5295d-79ec-4505-8d10-6b7dcca9d220" in namespace "projected-2503" to be "Succeeded or Failed"
Mar 28 13:23:26.253: INFO: Pod "pod-projected-secrets-c7f5295d-79ec-4505-8d10-6b7dcca9d220": Phase="Pending", Reason="", readiness=false. Elapsed: 29.899875ms
Mar 28 13:23:28.284: INFO: Pod "pod-projected-secrets-c7f5295d-79ec-4505-8d10-6b7dcca9d220": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060948824s
STEP: Saw pod success
Mar 28 13:23:28.284: INFO: Pod "pod-projected-secrets-c7f5295d-79ec-4505-8d10-6b7dcca9d220" satisfied condition "Succeeded or Failed"
Mar 28 13:23:28.315: INFO: Trying to get logs from node test1-md-0-mr4d9.c.k8s-boskos-gce-project-02.internal pod pod-projected-secrets-c7f5295d-79ec-4505-8d10-6b7dcca9d220 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 28 13:23:28.398: INFO: Waiting for pod pod-projected-secrets-c7f5295d-79ec-4505-8d10-6b7dcca9d220 to disappear
Mar 28 13:23:28.428: INFO: Pod pod-projected-secrets-c7f5295d-79ec-4505-8d10-6b7dcca9d220 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 28 13:23:28.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2503" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":272,"skipped":4616,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 28 13:23:28.521: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 28 13:25:28.800: INFO: Deleting pod "var-expansion-1e4ae586-7a23-4fe7-8189-9fd88f70c13b" in namespace "var-expansion-6532"
Mar 28 13:25:28.837: INFO: Wait up to 5m0s for pod "var-expansion-1e4ae586-7a23-4fe7-8189-9fd88f70c13b" to be fully deleted
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-03-28T13:25:32Z"}
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 28 13:25:38.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6532" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":283,"completed":273,"skipped":4626,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Mar 28 13:25:41.866: INFO: Successfully updated pod "labelsupdate6752da59-b7a1-4a94-9ea9-7c216e045f16"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 28 13:25:45.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4022" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":283,"completed":274,"skipped":4628,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 32.342267ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 28 13:25:46.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4773" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":283,"completed":275,"skipped":4638,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod busybox-44455376-708a-4b44-97f5-4105162282db in namespace container-probe-8974
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-03-28T13:25:48Z"}