Result:      FAILURE
Tests:       0 failed / 0 succeeded
Started:     2020-03-30 05:39
Elapsed:     2h0m
Revision:    release-0.2
resultstore: https://source.cloud.google.com/results/invocations/e9aeb2ec-41b5-496e-a77c-c0c151ae7127/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 128 lines ...
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: e3c88858-38b6-416b-9297-6193650cbb78
Loading: 
Loading: 0 packages loaded
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_go/releases/download/v0.22.2/rules_go-v0.22.2.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/kubernetes/repo-infra/archive/v0.0.3.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading: 0 packages loaded
Loading: 0 packages loaded
Analyzing: 3 targets (3 packages loaded, 0 targets configured)
Analyzing: 3 targets (16 packages loaded, 9 targets configured)
Analyzing: 3 targets (16 packages loaded, 9 targets configured)
Analyzing: 3 targets (16 packages loaded, 9 targets configured)
... skipping 1698 lines ...
    ubuntu-1804:
    ubuntu-1804: TASK [sysprep : Truncate shell history] ****************************************
    ubuntu-1804: ok: [default] => (item={u'path': u'/root/.bash_history'})
    ubuntu-1804: ok: [default] => (item={u'path': u'/home/ubuntu/.bash_history'})
    ubuntu-1804:
    ubuntu-1804: PLAY RECAP *********************************************************************
    ubuntu-1804: default                    : ok=60   changed=46   unreachable=0    failed=0    skipped=72   rescued=0    ignored=0
    ubuntu-1804:
==> ubuntu-1804: Deleting instance...
    ubuntu-1804: Instance has been deleted!
==> ubuntu-1804: Creating image...
==> ubuntu-1804: Deleting disk...
    ubuntu-1804: Disk has been deleted!
... skipping 416 lines ...
node/test1-controlplane-2.c.k8s-jkns-gci-gce-multizone.internal condition met
node/test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal condition met
node/test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal condition met
Conformance test: not doing test setup.
I0330 06:10:12.621381   25071 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0330 06:10:12.622434   25071 e2e.go:124] Starting e2e run "d19357e1-598c-4839-a6cc-d46a37e8950e" on Ginkgo node 1
{"msg":"Test Suite starting","total":283,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585548611 - Will randomize all specs
Will run 283 of 4993 specs

Mar 30 06:10:12.643: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
... skipping 30 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 30 06:10:49.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1397" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":283,"completed":1,"skipped":6,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Mar 30 06:11:20.367: INFO: Restart count of pod container-probe-5894/liveness-6a676c16-0861-403d-856f-845d17e6eded is now 1 (24.401665391s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 06:11:20.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5894" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":2,"skipped":30,"failed":0}
SSSSSSSSSSSSSSSSS
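For reference, the restart counted above is driven by an HTTP liveness probe. A minimal Go sketch using the k8s.io/api types the e2e framework itself is built on; the pod name, image, port, and thresholds are illustrative, and on client-go v1.23+ the embedded field is corev1.ProbeHandler rather than corev1.Handler:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pod with an HTTP liveness probe against /healthz; once the probe
	// fails, the kubelet restarts the container, which is the restart
	// count the test above waits for.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox", // illustrative image
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler on client-go >= v1.23
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       3,
					FailureThreshold:    1,
				},
			}},
		},
	}
	b, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}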
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 06:11:20.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5537" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":283,"completed":3,"skipped":47,"failed":0}
SSSSSSSS
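A minimal sketch of the object this test drives, assuming the standard k8s.io/api types; the quota name and limits are illustrative, and the Update/Delete calls against the API server are not shown:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A ResourceQuota caps aggregate resource usage in a namespace.
	// The test above creates one, updates spec.hard, then deletes it
	// and verifies the object is gone.
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"}, // illustrative
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceCPU:  resource.MustParse("500m"),
				corev1.ResourcePods: resource.MustParse("5"),
			},
		},
	}
	// "Update" is just a mutation of spec.hard followed by an Update
	// call against the API server.
	rq.Spec.Hard[corev1.ResourcePods] = resource.MustParse("10")

	b, _ := yaml.Marshal(rq)
	fmt.Println(string(b))
}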
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Mar 30 06:11:25.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6986" for this suite.
STEP: Destroying namespace "webhook-6986-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":283,"completed":4,"skipped":55,"failed":0}
S
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
Mar 30 06:13:06.079: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:175
Mar 30 06:13:06.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-4765" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":283,"completed":5,"skipped":56,"failed":0}
SSSSSSSSS
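The minTolerationSeconds behaviour comes from the TolerationSeconds field: a pod tolerates the NoExecute taint only for that many seconds and is then evicted, which is why the test runs two pods with different values. A minimal sketch reusing the taint key and value from the log above; the duration is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A NoExecute taint evicts running pods that do not tolerate it.
	// With TolerationSeconds set, eviction is delayed by that long.
	secs := int64(15) // illustrative
	tol := corev1.Toleration{
		Key:               "kubernetes.io/e2e-evict-taint-key",
		Operator:          corev1.TolerationOpEqual,
		Value:             "evictTaintVal",
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &secs,
	}
	b, _ := yaml.Marshal(tol)
	fmt.Println(string(b))
}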
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
Mar 30 06:13:20.599: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 06:13:23.895: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 06:13:37.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2957" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":283,"completed":6,"skipped":65,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:13:37.574: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b35a110f-521a-4f98-9f21-fc1b62c45ab9" in namespace "downward-api-9547" to be "Succeeded or Failed"
Mar 30 06:13:37.606: INFO: Pod "downwardapi-volume-b35a110f-521a-4f98-9f21-fc1b62c45ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.293898ms
Mar 30 06:13:39.638: INFO: Pod "downwardapi-volume-b35a110f-521a-4f98-9f21-fc1b62c45ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063702311s
Mar 30 06:13:41.669: INFO: Pod "downwardapi-volume-b35a110f-521a-4f98-9f21-fc1b62c45ab9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094656519s
STEP: Saw pod success
Mar 30 06:13:41.669: INFO: Pod "downwardapi-volume-b35a110f-521a-4f98-9f21-fc1b62c45ab9" satisfied condition "Succeeded or Failed"
Mar 30 06:13:41.699: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-b35a110f-521a-4f98-9f21-fc1b62c45ab9 container client-container: <nil>
STEP: delete the pod
Mar 30 06:13:41.789: INFO: Waiting for pod downwardapi-volume-b35a110f-521a-4f98-9f21-fc1b62c45ab9 to disappear
Mar 30 06:13:41.820: INFO: Pod downwardapi-volume-b35a110f-521a-4f98-9f21-fc1b62c45ab9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 06:13:41.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9547" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":283,"completed":7,"skipped":72,"failed":0}
SSSSSSSSSSS
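A sketch of the kind of pod this test creates, assuming k8s.io/api types: a downward API volume projects metadata.name into a file, and the client container prints it. Image, paths, and names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Downward API volume exposing only the pod name; the framework
	// reads the container's output and checks it matches metadata.name.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"}, // illustrative
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := yaml.Marshal(pod)
	fmt.Println(string(b))
}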
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 06:13:58.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2551" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":283,"completed":8,"skipped":83,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 30 06:14:05.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-983" for this suite.
STEP: Destroying namespace "webhook-983-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":283,"completed":9,"skipped":94,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 06:14:06.240: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 30 06:14:06.404: INFO: Waiting up to 5m0s for pod "pod-85a0a714-5824-46eb-86dd-05d7ab9912c7" in namespace "emptydir-7903" to be "Succeeded or Failed"
Mar 30 06:14:06.437: INFO: Pod "pod-85a0a714-5824-46eb-86dd-05d7ab9912c7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.965105ms
Mar 30 06:14:08.468: INFO: Pod "pod-85a0a714-5824-46eb-86dd-05d7ab9912c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063502418s
Mar 30 06:14:10.498: INFO: Pod "pod-85a0a714-5824-46eb-86dd-05d7ab9912c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09380379s
STEP: Saw pod success
Mar 30 06:14:10.498: INFO: Pod "pod-85a0a714-5824-46eb-86dd-05d7ab9912c7" satisfied condition "Succeeded or Failed"
Mar 30 06:14:10.527: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-85a0a714-5824-46eb-86dd-05d7ab9912c7 container test-container: <nil>
STEP: delete the pod
Mar 30 06:14:10.617: INFO: Waiting for pod pod-85a0a714-5824-46eb-86dd-05d7ab9912c7 to disappear
Mar 30 06:14:10.645: INFO: Pod pod-85a0a714-5824-46eb-86dd-05d7ab9912c7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 06:14:10.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7903" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":10,"skipped":108,"failed":0}
SSSSSSSSSSSSSSSSSSS
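A rough sketch of the pod shape behind the (non-root,0777,default) variant, under the assumption that "default" means the node's disk-backed medium and the tmpfs variants switch to corev1.StorageMediumMemory; UID, image, and command are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// emptyDir on the node's default medium, mounted into a container
	// that runs as a non-root UID; the test checks 0777 permissions on
	// a file inside the mount.
	uid := int64(1001) // illustrative non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"}, // illustrative
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // illustrative
				Command:      []string{"sh", "-c", "ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	b, _ := yaml.Marshal(pod)
	fmt.Println(string(b))
}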
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Mar 30 06:14:12.082: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 30 06:14:12.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7264" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":283,"completed":11,"skipped":127,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 62 lines ...
Mar 30 06:14:36.199: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1053/pods","resourceVersion":"2375"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 30 06:14:36.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1053" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":283,"completed":12,"skipped":145,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 06:14:47.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1674" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":283,"completed":13,"skipped":146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-5c7e206a-9f73-4625-ad33-443a21c35fa5
STEP: Creating a pod to test consume configMaps
Mar 30 06:14:48.241: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-390975b1-40c8-40c8-ba13-2c98724a83a8" in namespace "projected-6311" to be "Succeeded or Failed"
Mar 30 06:14:48.274: INFO: Pod "pod-projected-configmaps-390975b1-40c8-40c8-ba13-2c98724a83a8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.425288ms
Mar 30 06:14:50.306: INFO: Pod "pod-projected-configmaps-390975b1-40c8-40c8-ba13-2c98724a83a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064085987s
STEP: Saw pod success
Mar 30 06:14:50.306: INFO: Pod "pod-projected-configmaps-390975b1-40c8-40c8-ba13-2c98724a83a8" satisfied condition "Succeeded or Failed"
Mar 30 06:14:50.336: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-projected-configmaps-390975b1-40c8-40c8-ba13-2c98724a83a8 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 06:14:50.410: INFO: Waiting for pod pod-projected-configmaps-390975b1-40c8-40c8-ba13-2c98724a83a8 to disappear
Mar 30 06:14:50.441: INFO: Pod pod-projected-configmaps-390975b1-40c8-40c8-ba13-2c98724a83a8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 06:14:50.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6311" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":14,"skipped":186,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Mar 30 06:15:13.381: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 06:15:13.634: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 30 06:15:13.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9213" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":283,"completed":15,"skipped":220,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 06:15:18.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-434" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":283,"completed":16,"skipped":266,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 22 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 06:15:26.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6648" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":283,"completed":17,"skipped":269,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 06:15:26.908: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 30 06:15:27.068: INFO: Waiting up to 5m0s for pod "pod-aac254e1-ba2b-4f2b-a8d6-1eb3e7456582" in namespace "emptydir-1491" to be "Succeeded or Failed"
Mar 30 06:15:27.098: INFO: Pod "pod-aac254e1-ba2b-4f2b-a8d6-1eb3e7456582": Phase="Pending", Reason="", readiness=false. Elapsed: 30.267579ms
Mar 30 06:15:29.129: INFO: Pod "pod-aac254e1-ba2b-4f2b-a8d6-1eb3e7456582": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060415187s
Mar 30 06:15:31.159: INFO: Pod "pod-aac254e1-ba2b-4f2b-a8d6-1eb3e7456582": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09064913s
STEP: Saw pod success
Mar 30 06:15:31.159: INFO: Pod "pod-aac254e1-ba2b-4f2b-a8d6-1eb3e7456582" satisfied condition "Succeeded or Failed"
Mar 30 06:15:31.188: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-aac254e1-ba2b-4f2b-a8d6-1eb3e7456582 container test-container: <nil>
STEP: delete the pod
Mar 30 06:15:31.276: INFO: Waiting for pod pod-aac254e1-ba2b-4f2b-a8d6-1eb3e7456582 to disappear
Mar 30 06:15:31.307: INFO: Pod pod-aac254e1-ba2b-4f2b-a8d6-1eb3e7456582 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 06:15:31.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1491" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":18,"skipped":269,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 06:15:42.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-930" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":283,"completed":19,"skipped":288,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 30 06:15:42.904: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Mar 30 06:15:43.063: INFO: Waiting up to 5m0s for pod "var-expansion-6c0da817-8853-4b4f-852e-467f3d1a1502" in namespace "var-expansion-6093" to be "Succeeded or Failed"
Mar 30 06:15:43.094: INFO: Pod "var-expansion-6c0da817-8853-4b4f-852e-467f3d1a1502": Phase="Pending", Reason="", readiness=false. Elapsed: 30.902445ms
Mar 30 06:15:45.127: INFO: Pod "var-expansion-6c0da817-8853-4b4f-852e-467f3d1a1502": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06454977s
STEP: Saw pod success
Mar 30 06:15:45.128: INFO: Pod "var-expansion-6c0da817-8853-4b4f-852e-467f3d1a1502" satisfied condition "Succeeded or Failed"
Mar 30 06:15:45.157: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod var-expansion-6c0da817-8853-4b4f-852e-467f3d1a1502 container dapi-container: <nil>
STEP: delete the pod
Mar 30 06:15:45.233: INFO: Waiting for pod var-expansion-6c0da817-8853-4b4f-852e-467f3d1a1502 to disappear
Mar 30 06:15:45.283: INFO: Pod var-expansion-6c0da817-8853-4b4f-852e-467f3d1a1502 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 06:15:45.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6093" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":283,"completed":20,"skipped":322,"failed":0}
SS
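Subpath substitution is done with VolumeMount.SubPathExpr, which expands $(POD_NAME) from the container's environment at mount time, so each pod writes under its own directory inside the shared volume. A minimal sketch with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// POD_NAME comes from the downward API; SubPathExpr substitutes it
	// into the mount's subpath.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // illustrative
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "ls /volume_mount"},
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/volume_mount",
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
		},
	}
	b, _ := yaml.Marshal(pod)
	fmt.Println(string(b))
}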
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 30 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 30 06:15:50.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1920" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":283,"completed":21,"skipped":324,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Mar 30 06:15:58.063: INFO: stderr: ""
Mar 30 06:15:58.063: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 06:15:58.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-112" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":283,"completed":22,"skipped":356,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Mar 30 06:16:06.625: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 30 06:16:06.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7936" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":283,"completed":23,"skipped":387,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
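A minimal sketch of a container with a preStop exec hook, assuming k8s.io/api types; the hook command is illustrative, and on client-go v1.23+ the type is corev1.LifecycleHandler rather than corev1.Handler:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A preStop exec hook runs inside the container before the kubelet
	// sends SIGTERM; the test verifies the hook actually fired before
	// the pod went away.
	c := corev1.Container{
		Name:  "pod-with-prestop-exec-hook", // name taken from the log above
		Image: "busybox",                    // illustrative
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.Handler{ // LifecycleHandler on client-go >= v1.23
				Exec: &corev1.ExecAction{
					Command: []string{"sh", "-c", "echo prestop > /proc/1/fd/1"},
				},
			},
		},
	}
	b, _ := yaml.Marshal(c)
	fmt.Println(string(b))
}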
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 30 06:16:07.075: INFO: stderr: ""
Mar 30 06:16:07.075: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.1.107+a329e679223fe1\", GitCommit:\"a329e679223fe1a420d89f5e49f80019ad81a93d\", GitTreeState:\"clean\", BuildDate:\"2020-02-11T14:24:02Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.1\", GitCommit:\"d647ddbd755faf07169599a625faf302ffc34458\", GitTreeState:\"clean\", BuildDate:\"2019-10-02T16:51:36Z\", GoVersion:\"go1.12.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 06:16:07.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6164" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":283,"completed":24,"skipped":428,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 06:16:17.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1585" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":283,"completed":25,"skipped":433,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:16:18.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-acc4d8ed-23a4-4432-b4a8-1365a019cd78" in namespace "downward-api-3067" to be "Succeeded or Failed"
Mar 30 06:16:18.087: INFO: Pod "downwardapi-volume-acc4d8ed-23a4-4432-b4a8-1365a019cd78": Phase="Pending", Reason="", readiness=false. Elapsed: 31.874561ms
Mar 30 06:16:20.118: INFO: Pod "downwardapi-volume-acc4d8ed-23a4-4432-b4a8-1365a019cd78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06241781s
STEP: Saw pod success
Mar 30 06:16:20.118: INFO: Pod "downwardapi-volume-acc4d8ed-23a4-4432-b4a8-1365a019cd78" satisfied condition "Succeeded or Failed"
Mar 30 06:16:20.147: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-acc4d8ed-23a4-4432-b4a8-1365a019cd78 container client-container: <nil>
STEP: delete the pod
Mar 30 06:16:20.224: INFO: Waiting for pod downwardapi-volume-acc4d8ed-23a4-4432-b4a8-1365a019cd78 to disappear
Mar 30 06:16:20.254: INFO: Pod downwardapi-volume-acc4d8ed-23a4-4432-b4a8-1365a019cd78 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 06:16:20.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3067" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":26,"skipped":445,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 30 06:16:24.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2928" for this suite.
STEP: Destroying namespace "webhook-2928-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":283,"completed":27,"skipped":446,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-25eaae37-6937-4b5f-832b-5aee28ba8133
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 06:16:31.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2855" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":28,"skipped":467,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 06:16:34.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-732" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":283,"completed":29,"skipped":483,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 30 06:16:34.562: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 30 06:16:34.722: INFO: Waiting up to 5m0s for pod "downward-api-4eec8f53-0dfa-49f4-955e-3cc042b4c915" in namespace "downward-api-8304" to be "Succeeded or Failed"
Mar 30 06:16:34.751: INFO: Pod "downward-api-4eec8f53-0dfa-49f4-955e-3cc042b4c915": Phase="Pending", Reason="", readiness=false. Elapsed: 29.524569ms
Mar 30 06:16:36.782: INFO: Pod "downward-api-4eec8f53-0dfa-49f4-955e-3cc042b4c915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060089757s
STEP: Saw pod success
Mar 30 06:16:36.782: INFO: Pod "downward-api-4eec8f53-0dfa-49f4-955e-3cc042b4c915" satisfied condition "Succeeded or Failed"
Mar 30 06:16:36.812: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod downward-api-4eec8f53-0dfa-49f4-955e-3cc042b4c915 container dapi-container: <nil>
STEP: delete the pod
Mar 30 06:16:36.892: INFO: Waiting for pod downward-api-4eec8f53-0dfa-49f4-955e-3cc042b4c915 to disappear
Mar 30 06:16:36.922: INFO: Pod downward-api-4eec8f53-0dfa-49f4-955e-3cc042b4c915 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 30 06:16:36.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8304" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":283,"completed":30,"skipped":495,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 06:16:53.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1702" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":283,"completed":31,"skipped":535,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:16:53.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64dfb2d5-d5ad-4d9f-97b6-df7ede33bf1a" in namespace "downward-api-6368" to be "Succeeded or Failed"
Mar 30 06:16:53.698: INFO: Pod "downwardapi-volume-64dfb2d5-d5ad-4d9f-97b6-df7ede33bf1a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.969327ms
Mar 30 06:16:55.728: INFO: Pod "downwardapi-volume-64dfb2d5-d5ad-4d9f-97b6-df7ede33bf1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059831505s
STEP: Saw pod success
Mar 30 06:16:55.728: INFO: Pod "downwardapi-volume-64dfb2d5-d5ad-4d9f-97b6-df7ede33bf1a" satisfied condition "Succeeded or Failed"
Mar 30 06:16:55.757: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-64dfb2d5-d5ad-4d9f-97b6-df7ede33bf1a container client-container: <nil>
STEP: delete the pod
Mar 30 06:16:55.851: INFO: Waiting for pod downwardapi-volume-64dfb2d5-d5ad-4d9f-97b6-df7ede33bf1a to disappear
Mar 30 06:16:55.882: INFO: Pod downwardapi-volume-64dfb2d5-d5ad-4d9f-97b6-df7ede33bf1a no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 06:16:55.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6368" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":32,"skipped":559,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 26 lines ...
Mar 30 06:17:14.845: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 06:17:15.123: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 30 06:17:15.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6461" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":283,"completed":33,"skipped":629,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 19 lines ...
Mar 30 06:17:25.695: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 30 06:17:25.727: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 30 06:17:25.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2534" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":283,"completed":34,"skipped":666,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-00193f61-c0fa-4f3a-a99f-442a98b77c1b
STEP: Creating a pod to test consume secrets
Mar 30 06:17:26.154: INFO: Waiting up to 5m0s for pod "pod-secrets-965e6ed1-86bc-47a1-9d8a-89bf61f28d55" in namespace "secrets-586" to be "Succeeded or Failed"
Mar 30 06:17:26.187: INFO: Pod "pod-secrets-965e6ed1-86bc-47a1-9d8a-89bf61f28d55": Phase="Pending", Reason="", readiness=false. Elapsed: 32.555311ms
Mar 30 06:17:28.217: INFO: Pod "pod-secrets-965e6ed1-86bc-47a1-9d8a-89bf61f28d55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062336554s
STEP: Saw pod success
Mar 30 06:17:28.217: INFO: Pod "pod-secrets-965e6ed1-86bc-47a1-9d8a-89bf61f28d55" satisfied condition "Succeeded or Failed"
Mar 30 06:17:28.246: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-secrets-965e6ed1-86bc-47a1-9d8a-89bf61f28d55 container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 06:17:28.323: INFO: Waiting for pod pod-secrets-965e6ed1-86bc-47a1-9d8a-89bf61f28d55 to disappear
Mar 30 06:17:28.354: INFO: Pod pod-secrets-965e6ed1-86bc-47a1-9d8a-89bf61f28d55 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 06:17:28.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-586" for this suite.
STEP: Destroying namespace "secret-namespace-8079" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":283,"completed":35,"skipped":717,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 30 06:17:30.760: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 30 06:17:30.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6689" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":283,"completed":36,"skipped":751,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:17:31.090: INFO: Waiting up to 5m0s for pod "downwardapi-volume-991329f5-0d9a-49bb-91e2-49576a247eb4" in namespace "projected-5801" to be "Succeeded or Failed"
Mar 30 06:17:31.125: INFO: Pod "downwardapi-volume-991329f5-0d9a-49bb-91e2-49576a247eb4": Phase="Pending", Reason="", readiness=false. Elapsed: 35.338113ms
Mar 30 06:17:33.155: INFO: Pod "downwardapi-volume-991329f5-0d9a-49bb-91e2-49576a247eb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06542456s
STEP: Saw pod success
Mar 30 06:17:33.155: INFO: Pod "downwardapi-volume-991329f5-0d9a-49bb-91e2-49576a247eb4" satisfied condition "Succeeded or Failed"
Mar 30 06:17:33.186: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-991329f5-0d9a-49bb-91e2-49576a247eb4 container client-container: <nil>
STEP: delete the pod
Mar 30 06:17:33.264: INFO: Waiting for pod downwardapi-volume-991329f5-0d9a-49bb-91e2-49576a247eb4 to disappear
Mar 30 06:17:33.294: INFO: Pod downwardapi-volume-991329f5-0d9a-49bb-91e2-49576a247eb4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 06:17:33.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5801" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":283,"completed":37,"skipped":764,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Mar 30 06:17:35.880: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 06:17:36.139: INFO: Deleting pod dns-8832...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 06:17:36.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8832" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":283,"completed":38,"skipped":767,"failed":0}
S
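Configurable nameservers rely on dnsPolicy: None plus a dnsConfig block, from which the kubelet builds the pod's resolv.conf instead of inheriting the node's. A minimal sketch with illustrative values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// With DNSNone, resolv.conf is assembled entirely from DNSConfig.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-demo"}, // illustrative
		Spec: corev1.PodSpec{
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},           // illustrative
				Searches:    []string{"resolv.conf.local"}, // illustrative
			},
			Containers: []corev1.Container{{
				Name:    "resolv-check",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/resolv.conf"},
			}},
		},
	}
	b, _ := yaml.Marshal(pod)
	fmt.Println(string(b))
}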
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 30 06:17:36.409: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 06:17:39.700: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 06:17:53.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5421" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":283,"completed":39,"skipped":768,"failed":0}
S
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 06:17:53.764: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-77fd9fed-73f9-48cd-ac4c-ccd76456a8e6" in namespace "security-context-test-979" to be "Succeeded or Failed"
Mar 30 06:17:53.794: INFO: Pod "busybox-privileged-false-77fd9fed-73f9-48cd-ac4c-ccd76456a8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.251011ms
Mar 30 06:17:55.824: INFO: Pod "busybox-privileged-false-77fd9fed-73f9-48cd-ac4c-ccd76456a8e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060506173s
Mar 30 06:17:55.825: INFO: Pod "busybox-privileged-false-77fd9fed-73f9-48cd-ac4c-ccd76456a8e6" satisfied condition "Succeeded or Failed"
Mar 30 06:17:55.865: INFO: Got logs for pod "busybox-privileged-false-77fd9fed-73f9-48cd-ac4c-ccd76456a8e6": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 30 06:17:55.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-979" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":40,"skipped":769,"failed":0}
SSSSSSSSSSSSSSSSSS
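The "RTNETLINK answers: Operation not permitted" log line above is what an unprivileged container gets when it tries to modify the network stack: with privileged set to false, host-level capabilities are withheld. A minimal sketch with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// privileged: false (also the default) keeps the container from
	// performing host-level operations such as adding a network link.
	privileged := false
	c := corev1.Container{
		Name:    "busybox-privileged-false", // illustrative
		Image:   "busybox",                  // illustrative
		Command: []string{"ip", "link", "add", "dummy0", "type", "dummy"},
		SecurityContext: &corev1.SecurityContext{
			Privileged: &privileged,
		},
	}
	b, _ := yaml.Marshal(c)
	fmt.Println(string(b))
}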
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Mar 30 06:18:02.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3211" for this suite.
STEP: Destroying namespace "webhook-3211-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":283,"completed":41,"skipped":787,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 141 lines ...
Mar 30 06:18:24.277: INFO: stderr: ""
Mar 30 06:18:24.277: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 06:18:24.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7045" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":283,"completed":42,"skipped":815,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 30 06:18:24.367: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 30 06:18:24.541: INFO: Waiting up to 5m0s for pod "downward-api-0392cc57-96e7-4498-8e48-b5a5ccfc05e6" in namespace "downward-api-9169" to be "Succeeded or Failed"
Mar 30 06:18:24.574: INFO: Pod "downward-api-0392cc57-96e7-4498-8e48-b5a5ccfc05e6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.52914ms
Mar 30 06:18:26.603: INFO: Pod "downward-api-0392cc57-96e7-4498-8e48-b5a5ccfc05e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062279818s
STEP: Saw pod success
Mar 30 06:18:26.603: INFO: Pod "downward-api-0392cc57-96e7-4498-8e48-b5a5ccfc05e6" satisfied condition "Succeeded or Failed"
Mar 30 06:18:26.633: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod downward-api-0392cc57-96e7-4498-8e48-b5a5ccfc05e6 container dapi-container: <nil>
STEP: delete the pod
Mar 30 06:18:26.709: INFO: Waiting for pod downward-api-0392cc57-96e7-4498-8e48-b5a5ccfc05e6 to disappear
Mar 30 06:18:26.738: INFO: Pod downward-api-0392cc57-96e7-4498-8e48-b5a5ccfc05e6 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 30 06:18:26.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9169" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":283,"completed":43,"skipped":830,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-76d31819-58be-41a7-bf0d-fea8258f35b7
STEP: Creating a pod to test consume configMaps
Mar 30 06:18:27.018: INFO: Waiting up to 5m0s for pod "pod-configmaps-10e59652-159e-47e8-ba11-621f79ba1a26" in namespace "configmap-6079" to be "Succeeded or Failed"
Mar 30 06:18:27.056: INFO: Pod "pod-configmaps-10e59652-159e-47e8-ba11-621f79ba1a26": Phase="Pending", Reason="", readiness=false. Elapsed: 38.289557ms
Mar 30 06:18:29.088: INFO: Pod "pod-configmaps-10e59652-159e-47e8-ba11-621f79ba1a26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07071212s
STEP: Saw pod success
Mar 30 06:18:29.088: INFO: Pod "pod-configmaps-10e59652-159e-47e8-ba11-621f79ba1a26" satisfied condition "Succeeded or Failed"
Mar 30 06:18:29.119: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-configmaps-10e59652-159e-47e8-ba11-621f79ba1a26 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 06:18:29.195: INFO: Waiting for pod pod-configmaps-10e59652-159e-47e8-ba11-621f79ba1a26 to disappear
Mar 30 06:18:29.225: INFO: Pod pod-configmaps-10e59652-159e-47e8-ba11-621f79ba1a26 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 06:18:29.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6079" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":44,"skipped":843,"failed":0}

------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 30 06:18:29.440: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 06:18:29.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4343" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":283,"completed":45,"skipped":843,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 06:18:29.931: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 30 06:18:30.338: INFO: Waiting up to 5m0s for pod "pod-d702b2db-ca54-471b-ba2b-dcc13522fd97" in namespace "emptydir-8045" to be "Succeeded or Failed"
Mar 30 06:18:30.367: INFO: Pod "pod-d702b2db-ca54-471b-ba2b-dcc13522fd97": Phase="Pending", Reason="", readiness=false. Elapsed: 29.225808ms
Mar 30 06:18:32.397: INFO: Pod "pod-d702b2db-ca54-471b-ba2b-dcc13522fd97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058926395s
STEP: Saw pod success
Mar 30 06:18:32.397: INFO: Pod "pod-d702b2db-ca54-471b-ba2b-dcc13522fd97" satisfied condition "Succeeded or Failed"
Mar 30 06:18:32.427: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-d702b2db-ca54-471b-ba2b-dcc13522fd97 container test-container: <nil>
STEP: delete the pod
Mar 30 06:18:32.508: INFO: Waiting for pod pod-d702b2db-ca54-471b-ba2b-dcc13522fd97 to disappear
Mar 30 06:18:32.540: INFO: Pod pod-d702b2db-ca54-471b-ba2b-dcc13522fd97 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 06:18:32.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8045" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":46,"skipped":849,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 30 06:18:37.481: INFO: Successfully updated pod "pod-update-activedeadlineseconds-28395a18-c203-443f-8799-967c505d42ef"
Mar 30 06:18:37.481: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-28395a18-c203-443f-8799-967c505d42ef" in namespace "pods-4730" to be "terminated due to deadline exceeded"
Mar 30 06:18:37.511: INFO: Pod "pod-update-activedeadlineseconds-28395a18-c203-443f-8799-967c505d42ef": Phase="Running", Reason="", readiness=true. Elapsed: 29.896999ms
Mar 30 06:18:39.543: INFO: Pod "pod-update-activedeadlineseconds-28395a18-c203-443f-8799-967c505d42ef": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.061493209s
Mar 30 06:18:39.543: INFO: Pod "pod-update-activedeadlineseconds-28395a18-c203-443f-8799-967c505d42ef" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 06:18:39.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4730" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":283,"completed":47,"skipped":870,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
Mar 30 06:19:30.230: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-30T06:18:50Z generation:2 name:name2 resourceVersion:4480 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4d57ea9b-01bf-439d-a93e-74a4fef7a38a] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 06:19:40.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-3511" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":283,"completed":48,"skipped":888,"failed":0}
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 59.984309ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 30 06:19:41.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3869" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":283,"completed":49,"skipped":890,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 06:19:41.615: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-6229687d-8bc7-40ae-b9ce-8d96bc55c548
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 30 06:19:41.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-765" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":283,"completed":50,"skipped":907,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Mar 30 06:19:49.332: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-7411-crds.spec'
Mar 30 06:19:49.658: INFO: stderr: ""
Mar 30 06:19:49.658: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7411-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Mar 30 06:19:49.658: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-7411-crds.spec.bars'
Mar 30 06:19:49.993: INFO: stderr: ""
Mar 30 06:19:49.993: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7411-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar 30 06:19:49.994: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-7411-crds.spec.bars2'
Mar 30 06:19:50.421: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 06:19:53.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6833" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":283,"completed":51,"skipped":943,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 30 06:19:56.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1410" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":283,"completed":52,"skipped":957,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:19:56.323: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49228bb5-fd2b-4009-b4c2-dac0a1dcbbe0" in namespace "projected-9867" to be "Succeeded or Failed"
Mar 30 06:19:56.358: INFO: Pod "downwardapi-volume-49228bb5-fd2b-4009-b4c2-dac0a1dcbbe0": Phase="Pending", Reason="", readiness=false. Elapsed: 35.338793ms
Mar 30 06:19:58.398: INFO: Pod "downwardapi-volume-49228bb5-fd2b-4009-b4c2-dac0a1dcbbe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074784559s
STEP: Saw pod success
Mar 30 06:19:58.398: INFO: Pod "downwardapi-volume-49228bb5-fd2b-4009-b4c2-dac0a1dcbbe0" satisfied condition "Succeeded or Failed"
Mar 30 06:19:58.428: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-49228bb5-fd2b-4009-b4c2-dac0a1dcbbe0 container client-container: <nil>
STEP: delete the pod
Mar 30 06:19:58.508: INFO: Waiting for pod downwardapi-volume-49228bb5-fd2b-4009-b4c2-dac0a1dcbbe0 to disappear
Mar 30 06:19:58.539: INFO: Pod downwardapi-volume-49228bb5-fd2b-4009-b4c2-dac0a1dcbbe0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 06:19:58.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9867" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":283,"completed":53,"skipped":971,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 30 06:20:04.171: INFO: stderr: ""
Mar 30 06:20:04.171: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5146-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 06:20:07.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2042" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":283,"completed":54,"skipped":982,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Mar 30 06:20:10.481: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 06:20:10.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9228" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":283,"completed":55,"skipped":987,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 66 lines ...
Mar 30 06:22:34.062: INFO: Waiting for statefulset status.replicas updated to 0
Mar 30 06:22:34.091: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 30 06:22:34.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1279" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":283,"completed":56,"skipped":991,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 38 lines ...
• [SLOW TEST:304.984 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":283,"completed":57,"skipped":1016,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 29 lines ...
Mar 30 06:28:05.217: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 06:28:06.468: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 30 06:28:06.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9051" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":58,"skipped":1039,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 36 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 30 06:28:08.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5931" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":283,"completed":59,"skipped":1042,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 06:28:08.212: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 30 06:28:08.391: INFO: Waiting up to 5m0s for pod "pod-58021f96-0f5a-43e1-a0f6-34d279697c7d" in namespace "emptydir-3140" to be "Succeeded or Failed"
Mar 30 06:28:08.422: INFO: Pod "pod-58021f96-0f5a-43e1-a0f6-34d279697c7d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.151083ms
Mar 30 06:28:10.451: INFO: Pod "pod-58021f96-0f5a-43e1-a0f6-34d279697c7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059786514s
STEP: Saw pod success
Mar 30 06:28:10.451: INFO: Pod "pod-58021f96-0f5a-43e1-a0f6-34d279697c7d" satisfied condition "Succeeded or Failed"
Mar 30 06:28:10.481: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-58021f96-0f5a-43e1-a0f6-34d279697c7d container test-container: <nil>
STEP: delete the pod
Mar 30 06:28:10.558: INFO: Waiting for pod pod-58021f96-0f5a-43e1-a0f6-34d279697c7d to disappear
Mar 30 06:28:10.589: INFO: Pod pod-58021f96-0f5a-43e1-a0f6-34d279697c7d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 06:28:10.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3140" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":60,"skipped":1042,"failed":0}

------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Mar 30 06:28:15.087: INFO: Terminating Job.batch foo pods took: 2.100310041s
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 30 06:28:56.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9289" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":283,"completed":61,"skipped":1042,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-nv2j
STEP: Creating a pod to test atomic-volume-subpath
Mar 30 06:28:56.428: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-nv2j" in namespace "subpath-4921" to be "Succeeded or Failed"
Mar 30 06:28:56.457: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Pending", Reason="", readiness=false. Elapsed: 29.007393ms
Mar 30 06:28:58.487: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Running", Reason="", readiness=true. Elapsed: 2.058432461s
Mar 30 06:29:00.517: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Running", Reason="", readiness=true. Elapsed: 4.088358274s
Mar 30 06:29:02.546: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Running", Reason="", readiness=true. Elapsed: 6.117703134s
Mar 30 06:29:04.577: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Running", Reason="", readiness=true. Elapsed: 8.148229934s
Mar 30 06:29:06.606: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Running", Reason="", readiness=true. Elapsed: 10.177713974s
Mar 30 06:29:08.636: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Running", Reason="", readiness=true. Elapsed: 12.207803732s
Mar 30 06:29:10.666: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Running", Reason="", readiness=true. Elapsed: 14.237483422s
Mar 30 06:29:12.695: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Running", Reason="", readiness=true. Elapsed: 16.266805448s
Mar 30 06:29:14.725: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Running", Reason="", readiness=true. Elapsed: 18.296363796s
Mar 30 06:29:16.754: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Running", Reason="", readiness=true. Elapsed: 20.325876653s
Mar 30 06:29:18.784: INFO: Pod "pod-subpath-test-downwardapi-nv2j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.35562561s
STEP: Saw pod success
Mar 30 06:29:18.784: INFO: Pod "pod-subpath-test-downwardapi-nv2j" satisfied condition "Succeeded or Failed"
Mar 30 06:29:18.814: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-subpath-test-downwardapi-nv2j container test-container-subpath-downwardapi-nv2j: <nil>
STEP: delete the pod
Mar 30 06:29:18.892: INFO: Waiting for pod pod-subpath-test-downwardapi-nv2j to disappear
Mar 30 06:29:18.922: INFO: Pod pod-subpath-test-downwardapi-nv2j no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-nv2j
Mar 30 06:29:18.922: INFO: Deleting pod "pod-subpath-test-downwardapi-nv2j" in namespace "subpath-4921"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 30 06:29:18.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4921" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":283,"completed":62,"skipped":1044,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 30 06:29:23.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7267" for this suite.
STEP: Destroying namespace "webhook-7267-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":283,"completed":63,"skipped":1081,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] HostPath
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test hostPath mode
Mar 30 06:29:23.963: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6680" to be "Succeeded or Failed"
Mar 30 06:29:23.992: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 29.233896ms
Mar 30 06:29:26.024: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061242697s
STEP: Saw pod success
Mar 30 06:29:26.024: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Mar 30 06:29:26.053: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Mar 30 06:29:26.124: INFO: Waiting for pod pod-host-path-test to disappear
Mar 30 06:29:26.154: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:175
Mar 30 06:29:26.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6680" for this suite.
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":64,"skipped":1108,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-7a3ef3b4-eabd-4d29-b4a7-2b88f7d64e88
STEP: Creating a pod to test consume secrets
Mar 30 06:29:26.437: INFO: Waiting up to 5m0s for pod "pod-secrets-a6a12534-c233-43d6-ae53-26008f79fece" in namespace "secrets-9852" to be "Succeeded or Failed"
Mar 30 06:29:26.467: INFO: Pod "pod-secrets-a6a12534-c233-43d6-ae53-26008f79fece": Phase="Pending", Reason="", readiness=false. Elapsed: 29.217119ms
Mar 30 06:29:28.496: INFO: Pod "pod-secrets-a6a12534-c233-43d6-ae53-26008f79fece": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058804546s
STEP: Saw pod success
Mar 30 06:29:28.496: INFO: Pod "pod-secrets-a6a12534-c233-43d6-ae53-26008f79fece" satisfied condition "Succeeded or Failed"
Mar 30 06:29:28.526: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-secrets-a6a12534-c233-43d6-ae53-26008f79fece container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 06:29:28.603: INFO: Waiting for pod pod-secrets-a6a12534-c233-43d6-ae53-26008f79fece to disappear
Mar 30 06:29:28.632: INFO: Pod pod-secrets-a6a12534-c233-43d6-ae53-26008f79fece no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 06:29:28.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9852" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":65,"skipped":1109,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-2e367da5-f386-48f3-aa5b-366ec683fd3c
STEP: Creating a pod to test consume configMaps
Mar 30 06:29:28.927: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ff16c17-1902-498a-94f2-4107b5136905" in namespace "configmap-2710" to be "Succeeded or Failed"
Mar 30 06:29:28.959: INFO: Pod "pod-configmaps-6ff16c17-1902-498a-94f2-4107b5136905": Phase="Pending", Reason="", readiness=false. Elapsed: 31.492113ms
Mar 30 06:29:30.991: INFO: Pod "pod-configmaps-6ff16c17-1902-498a-94f2-4107b5136905": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063813317s
STEP: Saw pod success
Mar 30 06:29:30.991: INFO: Pod "pod-configmaps-6ff16c17-1902-498a-94f2-4107b5136905" satisfied condition "Succeeded or Failed"
Mar 30 06:29:31.021: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-configmaps-6ff16c17-1902-498a-94f2-4107b5136905 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 06:29:31.094: INFO: Waiting for pod pod-configmaps-6ff16c17-1902-498a-94f2-4107b5136905 to disappear
Mar 30 06:29:31.124: INFO: Pod pod-configmaps-6ff16c17-1902-498a-94f2-4107b5136905 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 06:29:31.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2710" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":66,"skipped":1150,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 337 lines ...
Mar 30 06:29:36.714: INFO: Deleting ReplicationController proxy-service-thpjl took: 35.809646ms
Mar 30 06:29:36.815: INFO: Terminating ReplicationController proxy-service-thpjl pods took: 100.631091ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 30 06:29:38.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4145" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":283,"completed":67,"skipped":1175,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 39 lines ...
Mar 30 06:31:00.034: INFO: Waiting for statefulset status.replicas updated to 0
Mar 30 06:31:00.064: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 30 06:31:00.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7624" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":283,"completed":68,"skipped":1181,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Mar 30 06:31:23.179: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 06:31:23.426: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 30 06:31:23.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1705" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":69,"skipped":1217,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 190 lines ...
Mar 30 06:31:38.186: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 30 06:31:38.186: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 06:31:38.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5886" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":283,"completed":70,"skipped":1221,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-55/secret-test-adf36833-8744-4021-911f-033ee7eb5a11
STEP: Creating a pod to test consume secrets
Mar 30 06:31:38.463: INFO: Waiting up to 5m0s for pod "pod-configmaps-115e4232-1ad1-491c-b7f7-99f15f59787c" in namespace "secrets-55" to be "Succeeded or Failed"
Mar 30 06:31:38.496: INFO: Pod "pod-configmaps-115e4232-1ad1-491c-b7f7-99f15f59787c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.313195ms
Mar 30 06:31:40.525: INFO: Pod "pod-configmaps-115e4232-1ad1-491c-b7f7-99f15f59787c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061431833s
STEP: Saw pod success
Mar 30 06:31:40.525: INFO: Pod "pod-configmaps-115e4232-1ad1-491c-b7f7-99f15f59787c" satisfied condition "Succeeded or Failed"
Mar 30 06:31:40.554: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-configmaps-115e4232-1ad1-491c-b7f7-99f15f59787c container env-test: <nil>
STEP: delete the pod
Mar 30 06:31:40.642: INFO: Waiting for pod pod-configmaps-115e4232-1ad1-491c-b7f7-99f15f59787c to disappear
Mar 30 06:31:40.672: INFO: Pod pod-configmaps-115e4232-1ad1-491c-b7f7-99f15f59787c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 30 06:31:40.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-55" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":71,"skipped":1241,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 75 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 30 06:31:44.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2285" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":283,"completed":72,"skipped":1249,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-23186229-5fbd-43bb-b4d5-fa4fa7d72e56
STEP: Creating a pod to test consume configMaps
Mar 30 06:31:45.078: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e95f6f7-3535-49e9-ad50-32a16ea4e5a8" in namespace "configmap-1663" to be "Succeeded or Failed"
Mar 30 06:31:45.107: INFO: Pod "pod-configmaps-2e95f6f7-3535-49e9-ad50-32a16ea4e5a8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.187791ms
Mar 30 06:31:47.137: INFO: Pod "pod-configmaps-2e95f6f7-3535-49e9-ad50-32a16ea4e5a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058762965s
STEP: Saw pod success
Mar 30 06:31:47.137: INFO: Pod "pod-configmaps-2e95f6f7-3535-49e9-ad50-32a16ea4e5a8" satisfied condition "Succeeded or Failed"
Mar 30 06:31:47.166: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-configmaps-2e95f6f7-3535-49e9-ad50-32a16ea4e5a8 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 06:31:47.239: INFO: Waiting for pod pod-configmaps-2e95f6f7-3535-49e9-ad50-32a16ea4e5a8 to disappear
Mar 30 06:31:47.269: INFO: Pod pod-configmaps-2e95f6f7-3535-49e9-ad50-32a16ea4e5a8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 06:31:47.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1663" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":73,"skipped":1251,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 30 06:31:47.358: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Mar 30 06:31:47.518: INFO: Waiting up to 5m0s for pod "client-containers-b2b91157-c112-4c20-965a-7aed86b40ef8" in namespace "containers-310" to be "Succeeded or Failed"
Mar 30 06:31:47.548: INFO: Pod "client-containers-b2b91157-c112-4c20-965a-7aed86b40ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.537696ms
Mar 30 06:31:49.577: INFO: Pod "client-containers-b2b91157-c112-4c20-965a-7aed86b40ef8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.05863418s
STEP: Saw pod success
Mar 30 06:31:49.577: INFO: Pod "client-containers-b2b91157-c112-4c20-965a-7aed86b40ef8" satisfied condition "Succeeded or Failed"
Mar 30 06:31:49.606: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod client-containers-b2b91157-c112-4c20-965a-7aed86b40ef8 container test-container: <nil>
STEP: delete the pod
Mar 30 06:31:49.681: INFO: Waiting for pod client-containers-b2b91157-c112-4c20-965a-7aed86b40ef8 to disappear
Mar 30 06:31:49.711: INFO: Pod client-containers-b2b91157-c112-4c20-965a-7aed86b40ef8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 30 06:31:49.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-310" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":283,"completed":74,"skipped":1264,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 06:31:54.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6724" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":283,"completed":75,"skipped":1288,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:31:55.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb6c48b3-fc43-45d7-ae57-e6f7bf9ab8ff" in namespace "downward-api-4124" to be "Succeeded or Failed"
Mar 30 06:31:55.457: INFO: Pod "downwardapi-volume-bb6c48b3-fc43-45d7-ae57-e6f7bf9ab8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 40.091937ms
Mar 30 06:31:57.486: INFO: Pod "downwardapi-volume-bb6c48b3-fc43-45d7-ae57-e6f7bf9ab8ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.069290327s
STEP: Saw pod success
Mar 30 06:31:57.486: INFO: Pod "downwardapi-volume-bb6c48b3-fc43-45d7-ae57-e6f7bf9ab8ff" satisfied condition "Succeeded or Failed"
Mar 30 06:31:57.515: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-bb6c48b3-fc43-45d7-ae57-e6f7bf9ab8ff container client-container: <nil>
STEP: delete the pod
Mar 30 06:31:57.589: INFO: Waiting for pod downwardapi-volume-bb6c48b3-fc43-45d7-ae57-e6f7bf9ab8ff to disappear
Mar 30 06:31:57.619: INFO: Pod downwardapi-volume-bb6c48b3-fc43-45d7-ae57-e6f7bf9ab8ff no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 06:31:57.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4124" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":76,"skipped":1307,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 06:31:57.707: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 30 06:31:57.867: INFO: Waiting up to 5m0s for pod "pod-f3590516-e9dc-412e-b8c6-93cbb79f9f8e" in namespace "emptydir-4544" to be "Succeeded or Failed"
Mar 30 06:31:57.897: INFO: Pod "pod-f3590516-e9dc-412e-b8c6-93cbb79f9f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 29.999994ms
Mar 30 06:31:59.927: INFO: Pod "pod-f3590516-e9dc-412e-b8c6-93cbb79f9f8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059705002s
STEP: Saw pod success
Mar 30 06:31:59.927: INFO: Pod "pod-f3590516-e9dc-412e-b8c6-93cbb79f9f8e" satisfied condition "Succeeded or Failed"
Mar 30 06:31:59.956: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-f3590516-e9dc-412e-b8c6-93cbb79f9f8e container test-container: <nil>
STEP: delete the pod
Mar 30 06:32:00.037: INFO: Waiting for pod pod-f3590516-e9dc-412e-b8c6-93cbb79f9f8e to disappear
Mar 30 06:32:00.067: INFO: Pod pod-f3590516-e9dc-412e-b8c6-93cbb79f9f8e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 06:32:00.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4544" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":77,"skipped":1317,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-618b489d-11b2-4d9f-958a-1fbf9de4b92f
STEP: Creating a pod to test consume secrets
Mar 30 06:32:00.351: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d5915e20-a345-4305-9c67-e0945e3c54af" in namespace "projected-1757" to be "Succeeded or Failed"
Mar 30 06:32:00.380: INFO: Pod "pod-projected-secrets-d5915e20-a345-4305-9c67-e0945e3c54af": Phase="Pending", Reason="", readiness=false. Elapsed: 29.379262ms
Mar 30 06:32:02.410: INFO: Pod "pod-projected-secrets-d5915e20-a345-4305-9c67-e0945e3c54af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058727922s
STEP: Saw pod success
Mar 30 06:32:02.410: INFO: Pod "pod-projected-secrets-d5915e20-a345-4305-9c67-e0945e3c54af" satisfied condition "Succeeded or Failed"
Mar 30 06:32:02.439: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-projected-secrets-d5915e20-a345-4305-9c67-e0945e3c54af container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 30 06:32:02.514: INFO: Waiting for pod pod-projected-secrets-d5915e20-a345-4305-9c67-e0945e3c54af to disappear
Mar 30 06:32:02.544: INFO: Pod pod-projected-secrets-d5915e20-a345-4305-9c67-e0945e3c54af no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 06:32:02.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1757" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":78,"skipped":1347,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 06:32:02.632: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 30 06:32:02.987: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 06:32:02.987: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 06:32:02.987: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
... skipping 6 lines ...
Mar 30 06:32:04.102: INFO: Node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal is running more than one daemon pod
Mar 30 06:32:05.073: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 06:32:05.073: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 06:32:05.073: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 06:32:05.103: INFO: Number of nodes with available pods: 2
Mar 30 06:32:05.103: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 30 06:32:05.225: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 06:32:05.226: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 06:32:05.226: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 06:32:05.259: INFO: Number of nodes with available pods: 1
Mar 30 06:32:05.259: INFO: Node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal is running more than one daemon pod
Mar 30 06:32:06.314: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
... skipping 3 lines ...
Mar 30 06:32:06.344: INFO: Node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal is running more than one daemon pod
Mar 30 06:32:07.315: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 06:32:07.315: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 06:32:07.315: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-jkns-gci-gce-multizone.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 30 06:32:07.344: INFO: Number of nodes with available pods: 2
Mar 30 06:32:07.344: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4703, will wait for the garbage collector to delete the pods
Mar 30 06:32:07.521: INFO: Deleting DaemonSet.extensions daemon-set took: 34.700399ms
Mar 30 06:32:07.621: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.337077ms
... skipping 4 lines ...
Mar 30 06:32:16.211: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4703/pods","resourceVersion":"8044"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 30 06:32:16.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4703" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":283,"completed":79,"skipped":1359,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 06:32:36.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7027" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":283,"completed":80,"skipped":1364,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 30 06:32:42.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4763" for this suite.
STEP: Destroying namespace "webhook-4763-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":283,"completed":81,"skipped":1409,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 43 lines ...
Mar 30 06:33:01.810: INFO: Pod "test-rollover-deployment-78df7bc796-dz4tv" is available:
&Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-dz4tv test-rollover-deployment-78df7bc796- deployment-3374 /api/v1/namespaces/deployment-3374/pods/test-rollover-deployment-78df7bc796-dz4tv d932bd76-71ca-4cb1-bae6-48fd0059d7b0 8397 0 2020-03-30 06:32:49 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78df7bc796] map[cni.projectcalico.org/podIP:192.168.130.253/32] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 ded1a43a-312b-423d-ae60-3ca6c917c4aa 0xc0003fcae7 0xc0003fcae8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kx5m7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kx5m7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kx5m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 06:32:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 06:32:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 06:32:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 06:32:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.5,PodIP:192.168.130.253,StartTime:2020-03-30 06:32:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 06:32:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://bd10ed32f2079d7d7d22b1319dd87658677253f8c327e2f825fb67d3ea82ba9a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.130.253,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 30 06:33:01.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3374" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":283,"completed":82,"skipped":1426,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-57f9bffa-d081-4b2f-abca-e78a7eb7ae31
STEP: Creating a pod to test consume secrets
Mar 30 06:33:02.083: INFO: Waiting up to 5m0s for pod "pod-secrets-b05ee9b9-9dde-4bd6-890a-4c6f8f9591bd" in namespace "secrets-6170" to be "Succeeded or Failed"
Mar 30 06:33:02.114: INFO: Pod "pod-secrets-b05ee9b9-9dde-4bd6-890a-4c6f8f9591bd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.835972ms
Mar 30 06:33:04.144: INFO: Pod "pod-secrets-b05ee9b9-9dde-4bd6-890a-4c6f8f9591bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060649698s
STEP: Saw pod success
Mar 30 06:33:04.144: INFO: Pod "pod-secrets-b05ee9b9-9dde-4bd6-890a-4c6f8f9591bd" satisfied condition "Succeeded or Failed"
Mar 30 06:33:04.173: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-secrets-b05ee9b9-9dde-4bd6-890a-4c6f8f9591bd container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 06:33:04.247: INFO: Waiting for pod pod-secrets-b05ee9b9-9dde-4bd6-890a-4c6f8f9591bd to disappear
Mar 30 06:33:04.277: INFO: Pod pod-secrets-b05ee9b9-9dde-4bd6-890a-4c6f8f9591bd no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 06:33:04.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6170" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":83,"skipped":1455,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:33:04.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db90a264-98d6-41c5-b2e9-510f32e3db6b" in namespace "projected-3262" to be "Succeeded or Failed"
Mar 30 06:33:04.555: INFO: Pod "downwardapi-volume-db90a264-98d6-41c5-b2e9-510f32e3db6b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.244379ms
Mar 30 06:33:06.584: INFO: Pod "downwardapi-volume-db90a264-98d6-41c5-b2e9-510f32e3db6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058681402s
STEP: Saw pod success
Mar 30 06:33:06.584: INFO: Pod "downwardapi-volume-db90a264-98d6-41c5-b2e9-510f32e3db6b" satisfied condition "Succeeded or Failed"
Mar 30 06:33:06.614: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-db90a264-98d6-41c5-b2e9-510f32e3db6b container client-container: <nil>
STEP: delete the pod
Mar 30 06:33:06.689: INFO: Waiting for pod downwardapi-volume-db90a264-98d6-41c5-b2e9-510f32e3db6b to disappear
Mar 30 06:33:06.719: INFO: Pod downwardapi-volume-db90a264-98d6-41c5-b2e9-510f32e3db6b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 06:33:06.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3262" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":84,"skipped":1463,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 14 lines ...
Mar 30 06:33:10.766: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 06:33:23.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3366" for this suite.
STEP: Destroying namespace "webhook-3366-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":283,"completed":85,"skipped":1485,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:33:23.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7313b446-902c-4088-b002-fc6335e34cd7" in namespace "downward-api-7497" to be "Succeeded or Failed"
Mar 30 06:33:23.909: INFO: Pod "downwardapi-volume-7313b446-902c-4088-b002-fc6335e34cd7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.363631ms
Mar 30 06:33:25.940: INFO: Pod "downwardapi-volume-7313b446-902c-4088-b002-fc6335e34cd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061902686s
STEP: Saw pod success
Mar 30 06:33:25.940: INFO: Pod "downwardapi-volume-7313b446-902c-4088-b002-fc6335e34cd7" satisfied condition "Succeeded or Failed"
Mar 30 06:33:25.968: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-7313b446-902c-4088-b002-fc6335e34cd7 container client-container: <nil>
STEP: delete the pod
Mar 30 06:33:26.048: INFO: Waiting for pod downwardapi-volume-7313b446-902c-4088-b002-fc6335e34cd7 to disappear
Mar 30 06:33:26.077: INFO: Pod downwardapi-volume-7313b446-902c-4088-b002-fc6335e34cd7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 06:33:26.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7497" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":86,"skipped":1534,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-xptx
STEP: Creating a pod to test atomic-volume-subpath
Mar 30 06:33:26.383: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xptx" in namespace "subpath-3566" to be "Succeeded or Failed"
Mar 30 06:33:26.413: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Pending", Reason="", readiness=false. Elapsed: 29.786473ms
Mar 30 06:33:28.442: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Running", Reason="", readiness=true. Elapsed: 2.059286318s
Mar 30 06:33:30.472: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Running", Reason="", readiness=true. Elapsed: 4.089185087s
Mar 30 06:33:32.502: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Running", Reason="", readiness=true. Elapsed: 6.118769772s
Mar 30 06:33:34.531: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Running", Reason="", readiness=true. Elapsed: 8.148559009s
Mar 30 06:33:36.561: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Running", Reason="", readiness=true. Elapsed: 10.177989062s
Mar 30 06:33:38.591: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Running", Reason="", readiness=true. Elapsed: 12.208347259s
Mar 30 06:33:40.620: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Running", Reason="", readiness=true. Elapsed: 14.237526047s
Mar 30 06:33:42.650: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Running", Reason="", readiness=true. Elapsed: 16.266765736s
Mar 30 06:33:44.679: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Running", Reason="", readiness=true. Elapsed: 18.29611016s
Mar 30 06:33:46.708: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Running", Reason="", readiness=true. Elapsed: 20.325432294s
Mar 30 06:33:48.738: INFO: Pod "pod-subpath-test-configmap-xptx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.354902056s
STEP: Saw pod success
Mar 30 06:33:48.738: INFO: Pod "pod-subpath-test-configmap-xptx" satisfied condition "Succeeded or Failed"
Mar 30 06:33:48.767: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-subpath-test-configmap-xptx container test-container-subpath-configmap-xptx: <nil>
STEP: delete the pod
Mar 30 06:33:48.847: INFO: Waiting for pod pod-subpath-test-configmap-xptx to disappear
Mar 30 06:33:48.877: INFO: Pod pod-subpath-test-configmap-xptx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xptx
Mar 30 06:33:48.877: INFO: Deleting pod "pod-subpath-test-configmap-xptx" in namespace "subpath-3566"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 30 06:33:48.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3566" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":283,"completed":87,"skipped":1548,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 125 lines ...
Mar 30 06:34:42.576: INFO: ss-2  test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 06:34:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 06:34:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 06:34:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 06:34:09 +0000 UTC  }]
Mar 30 06:34:42.576: INFO: 
Mar 30 06:34:42.576: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-5704
Mar 30 06:34:43.608: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-5704 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 06:34:43.943: INFO: rc: 1
Mar 30 06:34:43.943: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-5704 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Mar 30 06:34:53.943: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-5704 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 06:34:54.188: INFO: rc: 1
Mar 30 06:34:54.188: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-5704 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
... skipping 280 lines ...
Mar 30 06:39:50.926: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-5704 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 30 06:39:51.165: INFO: rc: 1
Mar 30 06:39:51.165: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
Mar 30 06:39:51.165: INFO: Scaling statefulset ss to 0
Mar 30 06:39:51.256: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":283,"completed":88,"skipped":1579,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 06:39:55.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3655" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":283,"completed":89,"skipped":1593,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-9d7e17d0-631b-405a-a2f4-202c78a589ce
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 06:41:29.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5575" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":90,"skipped":1613,"failed":0}
S
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 06:41:29.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6825" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":283,"completed":91,"skipped":1614,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 30 06:41:32.285: INFO: Initial restart count of pod liveness-ec8cc22e-cd12-49b8-b768-f84039a50714 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 06:45:33.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-158" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":283,"completed":92,"skipped":1645,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:45:34.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85b9c66d-f784-4359-90a7-8d991a4c33b6" in namespace "projected-2049" to be "Succeeded or Failed"
Mar 30 06:45:34.107: INFO: Pod "downwardapi-volume-85b9c66d-f784-4359-90a7-8d991a4c33b6": Phase="Pending", Reason="", readiness=false. Elapsed: 29.361689ms
Mar 30 06:45:36.138: INFO: Pod "downwardapi-volume-85b9c66d-f784-4359-90a7-8d991a4c33b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061036427s
STEP: Saw pod success
Mar 30 06:45:36.139: INFO: Pod "downwardapi-volume-85b9c66d-f784-4359-90a7-8d991a4c33b6" satisfied condition "Succeeded or Failed"
Mar 30 06:45:36.167: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-85b9c66d-f784-4359-90a7-8d991a4c33b6 container client-container: <nil>
STEP: delete the pod
Mar 30 06:45:36.252: INFO: Waiting for pod downwardapi-volume-85b9c66d-f784-4359-90a7-8d991a4c33b6 to disappear
Mar 30 06:45:36.281: INFO: Pod downwardapi-volume-85b9c66d-f784-4359-90a7-8d991a4c33b6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 06:45:36.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2049" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":93,"skipped":1651,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:180
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 06:45:38.685: INFO: Waiting up to 5m0s for pod "client-envvars-ffb690d6-a01b-46ed-a528-e89f88ca94f2" in namespace "pods-2529" to be "Succeeded or Failed"
Mar 30 06:45:38.716: INFO: Pod "client-envvars-ffb690d6-a01b-46ed-a528-e89f88ca94f2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.544162ms
Mar 30 06:45:40.745: INFO: Pod "client-envvars-ffb690d6-a01b-46ed-a528-e89f88ca94f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059909458s
STEP: Saw pod success
Mar 30 06:45:40.745: INFO: Pod "client-envvars-ffb690d6-a01b-46ed-a528-e89f88ca94f2" satisfied condition "Succeeded or Failed"
Mar 30 06:45:40.774: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod client-envvars-ffb690d6-a01b-46ed-a528-e89f88ca94f2 container env3cont: <nil>
STEP: delete the pod
Mar 30 06:45:40.862: INFO: Waiting for pod client-envvars-ffb690d6-a01b-46ed-a528-e89f88ca94f2 to disappear
Mar 30 06:45:40.891: INFO: Pod client-envvars-ffb690d6-a01b-46ed-a528-e89f88ca94f2 no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 06:45:40.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2529" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":283,"completed":94,"skipped":1653,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Mar 30 06:45:44.970: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-3931 pod-service-account-ab81fc17-161a-408d-a277-136a447c407f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 30 06:45:45.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3931" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":283,"completed":95,"skipped":1667,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-8442/configmap-test-17c4303f-31ad-4e6a-8c7f-2e3e7d90df66
STEP: Creating a pod to test consume configMaps
Mar 30 06:45:45.739: INFO: Waiting up to 5m0s for pod "pod-configmaps-763a8236-7919-4d00-b0dc-2a707944bcca" in namespace "configmap-8442" to be "Succeeded or Failed"
Mar 30 06:45:45.769: INFO: Pod "pod-configmaps-763a8236-7919-4d00-b0dc-2a707944bcca": Phase="Pending", Reason="", readiness=false. Elapsed: 30.166066ms
Mar 30 06:45:47.799: INFO: Pod "pod-configmaps-763a8236-7919-4d00-b0dc-2a707944bcca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059711714s
STEP: Saw pod success
Mar 30 06:45:47.799: INFO: Pod "pod-configmaps-763a8236-7919-4d00-b0dc-2a707944bcca" satisfied condition "Succeeded or Failed"
Mar 30 06:45:47.827: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-configmaps-763a8236-7919-4d00-b0dc-2a707944bcca container env-test: <nil>
STEP: delete the pod
Mar 30 06:45:47.903: INFO: Waiting for pod pod-configmaps-763a8236-7919-4d00-b0dc-2a707944bcca to disappear
Mar 30 06:45:47.933: INFO: Pod pod-configmaps-763a8236-7919-4d00-b0dc-2a707944bcca no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 06:45:47.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8442" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":96,"skipped":1725,"failed":0}
SSSSSS
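For reference, a minimal sketch of the ConfigMap-to-environment wiring this test exercises (all names and values are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1

The pod runs to completion and its container log (fetched above from the env-test container) should contain DATA_1=value-1.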
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-dc30a824-b635-4e55-96e4-d12a327fec7a
STEP: Creating a pod to test consume configMaps
Mar 30 06:45:48.211: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea5a4fc0-6330-4285-b52a-27548f2cb9d1" in namespace "projected-1159" to be "Succeeded or Failed"
Mar 30 06:45:48.243: INFO: Pod "pod-projected-configmaps-ea5a4fc0-6330-4285-b52a-27548f2cb9d1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.063413ms
Mar 30 06:45:50.272: INFO: Pod "pod-projected-configmaps-ea5a4fc0-6330-4285-b52a-27548f2cb9d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061447303s
STEP: Saw pod success
Mar 30 06:45:50.273: INFO: Pod "pod-projected-configmaps-ea5a4fc0-6330-4285-b52a-27548f2cb9d1" satisfied condition "Succeeded or Failed"
Mar 30 06:45:50.301: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-projected-configmaps-ea5a4fc0-6330-4285-b52a-27548f2cb9d1 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 06:45:50.374: INFO: Waiting for pod pod-projected-configmaps-ea5a4fc0-6330-4285-b52a-27548f2cb9d1 to disappear
Mar 30 06:45:50.404: INFO: Pod pod-projected-configmaps-ea5a4fc0-6330-4285-b52a-27548f2cb9d1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 06:45:50.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1159" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":97,"skipped":1731,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
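"With mappings as non-root" translates to a projected configMap volume whose items remap keys onto paths, read under a non-root securityContext. A hedged sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root, as the test name requires
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/path/to/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-cm-demo-map   # illustrative
          items:                        # the "mappings": key -> relative path
          - key: data-1
            path: path/to/data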
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 06:45:50.495: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 30 06:45:50.654: INFO: Waiting up to 5m0s for pod "pod-f6bf1016-2952-4bf5-9733-cc82f64cf928" in namespace "emptydir-4391" to be "Succeeded or Failed"
Mar 30 06:45:50.684: INFO: Pod "pod-f6bf1016-2952-4bf5-9733-cc82f64cf928": Phase="Pending", Reason="", readiness=false. Elapsed: 29.964195ms
Mar 30 06:45:52.713: INFO: Pod "pod-f6bf1016-2952-4bf5-9733-cc82f64cf928": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058910324s
STEP: Saw pod success
Mar 30 06:45:52.713: INFO: Pod "pod-f6bf1016-2952-4bf5-9733-cc82f64cf928" satisfied condition "Succeeded or Failed"
Mar 30 06:45:52.742: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-f6bf1016-2952-4bf5-9733-cc82f64cf928 container test-container: <nil>
STEP: delete the pod
Mar 30 06:45:52.818: INFO: Waiting for pod pod-f6bf1016-2952-4bf5-9733-cc82f64cf928 to disappear
Mar 30 06:45:52.848: INFO: Pod pod-f6bf1016-2952-4bf5-9733-cc82f64cf928 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 06:45:52.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4391" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":98,"skipped":1763,"failed":0}
SSSSSSSSSSSSSSS
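The (root,0644,default) triple in the test name encodes: run as root, expect 0644 file permissions, and use the default disk-backed emptyDir medium. Roughly, assuming a busybox image and illustrative paths:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir: {}                     # default medium = node-local disk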
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:45:53.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d45c514-ef8a-4db8-95a1-8cdb56bd5138" in namespace "projected-9535" to be "Succeeded or Failed"
Mar 30 06:45:53.131: INFO: Pod "downwardapi-volume-7d45c514-ef8a-4db8-95a1-8cdb56bd5138": Phase="Pending", Reason="", readiness=false. Elapsed: 30.728582ms
Mar 30 06:45:55.160: INFO: Pod "downwardapi-volume-7d45c514-ef8a-4db8-95a1-8cdb56bd5138": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059782984s
STEP: Saw pod success
Mar 30 06:45:55.160: INFO: Pod "downwardapi-volume-7d45c514-ef8a-4db8-95a1-8cdb56bd5138" satisfied condition "Succeeded or Failed"
Mar 30 06:45:55.189: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-7d45c514-ef8a-4db8-95a1-8cdb56bd5138 container client-container: <nil>
STEP: delete the pod
Mar 30 06:45:55.262: INFO: Waiting for pod downwardapi-volume-7d45c514-ef8a-4db8-95a1-8cdb56bd5138 to disappear
Mar 30 06:45:55.292: INFO: Pod downwardapi-volume-7d45c514-ef8a-4db8-95a1-8cdb56bd5138 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 06:45:55.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9535" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":99,"skipped":1778,"failed":0}
SSSSS
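This exercises the downward API through a projected volume: the container's own memory request is written to a file it can read back. An illustrative sketch (names and the 32Mi request are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi                 # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory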
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 30 06:45:59.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2880" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":283,"completed":100,"skipped":1783,"failed":0}
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  test/e2e/framework/framework.go:175
Mar 30 06:46:30.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7311" for this suite.
STEP: Destroying namespace "nsdeletetest-2530" for this suite.
Mar 30 06:46:30.332: INFO: Namespace nsdeletetest-2530 was already deleted
STEP: Destroying namespace "nsdeletetest-4999" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":283,"completed":101,"skipped":1786,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 06:46:30.365: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 30 06:46:30.521: INFO: Waiting up to 5m0s for pod "pod-79379995-3b7e-4f45-9f19-3f727a18110d" in namespace "emptydir-4281" to be "Succeeded or Failed"
Mar 30 06:46:30.551: INFO: Pod "pod-79379995-3b7e-4f45-9f19-3f727a18110d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.138989ms
Mar 30 06:46:32.580: INFO: Pod "pod-79379995-3b7e-4f45-9f19-3f727a18110d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059675012s
STEP: Saw pod success
Mar 30 06:46:32.580: INFO: Pod "pod-79379995-3b7e-4f45-9f19-3f727a18110d" satisfied condition "Succeeded or Failed"
Mar 30 06:46:32.609: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-79379995-3b7e-4f45-9f19-3f727a18110d container test-container: <nil>
STEP: delete the pod
Mar 30 06:46:32.680: INFO: Waiting for pod pod-79379995-3b7e-4f45-9f19-3f727a18110d to disappear
Mar 30 06:46:32.710: INFO: Pod pod-79379995-3b7e-4f45-9f19-3f727a18110d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 06:46:32.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4281" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":102,"skipped":1826,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
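Same pattern as the earlier emptyDir test, with the two twists named here: a non-root user and the tmpfs-backed Memory medium. Sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /test-volume/file && chmod 0666 /test-volume/file"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                 # tmpfs-backed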
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 30 06:46:32.799: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 30 06:46:32.956: INFO: Waiting up to 5m0s for pod "downward-api-115a0431-757a-47ab-b0f6-80c130b48fab" in namespace "downward-api-1745" to be "Succeeded or Failed"
Mar 30 06:46:32.986: INFO: Pod "downward-api-115a0431-757a-47ab-b0f6-80c130b48fab": Phase="Pending", Reason="", readiness=false. Elapsed: 29.897156ms
Mar 30 06:46:35.016: INFO: Pod "downward-api-115a0431-757a-47ab-b0f6-80c130b48fab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059918031s
STEP: Saw pod success
Mar 30 06:46:35.016: INFO: Pod "downward-api-115a0431-757a-47ab-b0f6-80c130b48fab" satisfied condition "Succeeded or Failed"
Mar 30 06:46:35.045: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod downward-api-115a0431-757a-47ab-b0f6-80c130b48fab container dapi-container: <nil>
STEP: delete the pod
Mar 30 06:46:35.117: INFO: Waiting for pod downward-api-115a0431-757a-47ab-b0f6-80c130b48fab to disappear
Mar 30 06:46:35.147: INFO: Pod downward-api-115a0431-757a-47ab-b0f6-80c130b48fab no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 30 06:46:35.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1745" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":283,"completed":103,"skipped":1853,"failed":0}
S
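The downward API can also surface pod metadata as environment variables; here the assertion is that the pod's UID shows up. A minimal illustrative pod:

apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid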
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 06:46:42.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8109" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":283,"completed":104,"skipped":1854,"failed":0}
SSSSSSSSSS
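The quota test only needs an object like the following (hard limits are illustrative); the assertion is that status.used is populated promptly after creation, before anything consumes the quota:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-demo
spec:
  hard:
    pods: "5"
    requests.cpu: "1"
    requests.memory: 1Gi

kubectl describe resourcequota quota-demo shows the hard limits next to the current usage the controller computes.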
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 30 06:46:42.538: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Mar 30 06:46:42.701: INFO: Waiting up to 5m0s for pod "var-expansion-5e37e21b-84d0-49eb-9d49-5b83544cbed6" in namespace "var-expansion-501" to be "Succeeded or Failed"
Mar 30 06:46:42.730: INFO: Pod "var-expansion-5e37e21b-84d0-49eb-9d49-5b83544cbed6": Phase="Pending", Reason="", readiness=false. Elapsed: 29.088522ms
Mar 30 06:46:44.760: INFO: Pod "var-expansion-5e37e21b-84d0-49eb-9d49-5b83544cbed6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058842221s
STEP: Saw pod success
Mar 30 06:46:44.760: INFO: Pod "var-expansion-5e37e21b-84d0-49eb-9d49-5b83544cbed6" satisfied condition "Succeeded or Failed"
Mar 30 06:46:44.789: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod var-expansion-5e37e21b-84d0-49eb-9d49-5b83544cbed6 container dapi-container: <nil>
STEP: delete the pod
Mar 30 06:46:44.863: INFO: Waiting for pod var-expansion-5e37e21b-84d0-49eb-9d49-5b83544cbed6 to disappear
Mar 30 06:46:44.893: INFO: Pod var-expansion-5e37e21b-84d0-49eb-9d49-5b83544cbed6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 06:46:44.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-501" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":283,"completed":105,"skipped":1864,"failed":0}
SSSSS
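The $(VAR) syntax in a container's args is expanded by the kubelet from the container's env, before any shell sees it. A sketch of the kind of pod this test creates, with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]        # $(MESSAGE) is substituted by the kubelet, not the shell
    env:
    - name: MESSAGE
      value: hello from the environment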
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Mar 30 06:46:49.275: INFO: stderr: ""
Mar 30 06:46:49.275: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 06:46:49.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5504" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":283,"completed":106,"skipped":1869,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-k8v7
STEP: Creating a pod to test atomic-volume-subpath
Mar 30 06:46:49.579: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-k8v7" in namespace "subpath-3308" to be "Succeeded or Failed"
Mar 30 06:46:49.610: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.3607ms
Mar 30 06:46:51.640: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Running", Reason="", readiness=true. Elapsed: 2.06020004s
Mar 30 06:46:53.669: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Running", Reason="", readiness=true. Elapsed: 4.089821366s
Mar 30 06:46:55.699: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Running", Reason="", readiness=true. Elapsed: 6.119921017s
Mar 30 06:46:57.729: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Running", Reason="", readiness=true. Elapsed: 8.149206593s
Mar 30 06:46:59.758: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Running", Reason="", readiness=true. Elapsed: 10.178707126s
Mar 30 06:47:01.788: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Running", Reason="", readiness=true. Elapsed: 12.208572602s
Mar 30 06:47:03.817: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Running", Reason="", readiness=true. Elapsed: 14.23771704s
Mar 30 06:47:05.847: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Running", Reason="", readiness=true. Elapsed: 16.26773684s
Mar 30 06:47:07.878: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Running", Reason="", readiness=true. Elapsed: 18.298480742s
Mar 30 06:47:09.908: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Running", Reason="", readiness=true. Elapsed: 20.328338125s
Mar 30 06:47:11.937: INFO: Pod "pod-subpath-test-secret-k8v7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.357815958s
STEP: Saw pod success
Mar 30 06:47:11.937: INFO: Pod "pod-subpath-test-secret-k8v7" satisfied condition "Succeeded or Failed"
Mar 30 06:47:11.967: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-subpath-test-secret-k8v7 container test-container-subpath-secret-k8v7: <nil>
STEP: delete the pod
Mar 30 06:47:12.052: INFO: Waiting for pod pod-subpath-test-secret-k8v7 to disappear
Mar 30 06:47:12.162: INFO: Pod pod-subpath-test-secret-k8v7 no longer exists
STEP: Deleting pod pod-subpath-test-secret-k8v7
Mar 30 06:47:12.162: INFO: Deleting pod "pod-subpath-test-secret-k8v7" in namespace "subpath-3308"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 30 06:47:12.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3308" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":283,"completed":107,"skipped":1923,"failed":0}
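Atomic-writer volumes (secret, configMap, downwardAPI, projected) publish their contents through symlink swaps; a subPath mount pins one entry directly, which is the interaction this probe checks while the pod stays Running for roughly 20 seconds above. Illustrative shape:

apiVersion: v1
kind: Pod
metadata:
  name: subpath-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /mnt/data"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/data           # the single key appears here, not as a directory
      subPath: data-1                # path inside the secret volume, i.e. a key name
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-secret-data   # illustrative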

------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 30 06:47:14.593: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 30 06:47:14.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2920" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":108,"skipped":1923,"failed":0}
SSSSSSSSS
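The "Expected: &{OK}" line above is the suite comparing the container's termination message. With FallbackToLogsOnError, log output is used only when the container fails and the message file is empty; here the container succeeds and writes the file itself. Sketch:

apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log      # the default path
    terminationMessagePolicy: FallbackToLogsOnError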
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Mar 30 06:47:16.993: INFO: Trying to dial the pod
Mar 30 06:47:22.089: INFO: Controller my-hostname-basic-86c33acc-35b6-4d65-bb95-0ab83f60c876: Got expected result from replica 1 [my-hostname-basic-86c33acc-35b6-4d65-bb95-0ab83f60c876-hbx9c]: "my-hostname-basic-86c33acc-35b6-4d65-bb95-0ab83f60c876-hbx9c", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 30 06:47:22.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9632" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":109,"skipped":1932,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
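The replica name above (my-hostname-basic-...-hbx9c) comes from a ReplicationController whose pods serve their own hostname over HTTP; the test dials each replica and expects the pod name back. Roughly, with an assumed image and tag:

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic            # illustrative; the suite appends a UID
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image/tag; anything serving its hostname over HTTP works
        args: ["serve-hostname"]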
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Mar 30 06:47:32.569: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:32.600: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:32.693: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:32.724: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:32.756: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:32.787: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:32.848: INFO: Lookups using dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9294.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9294.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local jessie_udp@dns-test-service-2.dns-9294.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9294.svc.cluster.local]

Mar 30 06:47:37.880: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:37.911: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:37.941: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:37.972: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:38.065: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:38.095: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:38.126: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:38.157: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:38.219: INFO: Lookups using dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9294.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9294.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local jessie_udp@dns-test-service-2.dns-9294.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9294.svc.cluster.local]

Mar 30 06:47:42.882: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:42.912: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:42.942: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:42.972: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:43.064: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:43.095: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:43.126: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:43.157: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:43.219: INFO: Lookups using dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9294.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9294.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local jessie_udp@dns-test-service-2.dns-9294.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9294.svc.cluster.local]

Mar 30 06:47:47.881: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:47.912: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:47.943: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:47.972: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:48.064: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:48.095: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:48.126: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:48.157: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:48.219: INFO: Lookups using dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9294.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9294.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local jessie_udp@dns-test-service-2.dns-9294.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9294.svc.cluster.local]

Mar 30 06:47:52.879: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:52.910: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:52.939: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:52.970: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:53.064: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:53.095: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:53.126: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:53.157: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9294.svc.cluster.local from pod dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda: the server could not find the requested resource (get pods dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda)
Mar 30 06:47:53.219: INFO: Lookups using dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9294.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9294.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9294.svc.cluster.local jessie_udp@dns-test-service-2.dns-9294.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9294.svc.cluster.local]

Mar 30 06:47:58.219: INFO: DNS probes using dns-9294/dns-test-7b2cdbdd-dede-4e54-84fd-4c924da5efda succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 06:47:58.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9294" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":283,"completed":110,"skipped":1958,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
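The repeated "Unable to read ..." lines are the expected polling phase: the probe keeps querying until records for <hostname>.<subdomain>.<namespace>.svc.cluster.local resolve, then reports success at 06:47:58. The wiring behind the names visible in the log, sketched:

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2           # name taken from the lookups above
spec:
  clusterIP: None                    # headless, required for per-pod subdomain records
  selector:
    name: dns-querier-2
  ports:
  - name: http
    port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier-2
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2      # yields dns-querier-2.dns-test-service-2.<ns>.svc.cluster.local
  containers:
  - name: querier
    image: busybox
    command: ["sleep", "3600"]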
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 30 06:47:58.529: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 06:48:01.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8067" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":283,"completed":111,"skipped":1979,"failed":0}
SSSSSSSSSSS
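Listing works against any registered CRD; a minimal apiextensions.k8s.io/v1 definition looks roughly like this (group and names are hypothetical):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

kubectl get crd then includes it in the list the test walks.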
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 30 06:48:02.293: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Mar 30 06:48:03.485: INFO: Waiting up to 5m0s for pod "client-containers-a1381760-6ffe-4594-bd92-61092ee16657" in namespace "containers-803" to be "Succeeded or Failed"
Mar 30 06:48:03.514: INFO: Pod "client-containers-a1381760-6ffe-4594-bd92-61092ee16657": Phase="Pending", Reason="", readiness=false. Elapsed: 28.31286ms
Mar 30 06:48:05.543: INFO: Pod "client-containers-a1381760-6ffe-4594-bd92-61092ee16657": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058061941s
STEP: Saw pod success
Mar 30 06:48:05.543: INFO: Pod "client-containers-a1381760-6ffe-4594-bd92-61092ee16657" satisfied condition "Succeeded or Failed"
Mar 30 06:48:05.572: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod client-containers-a1381760-6ffe-4594-bd92-61092ee16657 container test-container: <nil>
STEP: delete the pod
Mar 30 06:48:05.661: INFO: Waiting for pod client-containers-a1381760-6ffe-4594-bd92-61092ee16657 to disappear
Mar 30 06:48:05.691: INFO: Pod client-containers-a1381760-6ffe-4594-bd92-61092ee16657 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 30 06:48:05.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-803" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":283,"completed":112,"skipped":1990,"failed":0}
SSSSSSSSSSSS
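"Override all" means setting both command (replacing the image's ENTRYPOINT) and args (replacing its CMD) in the pod spec. Minimal illustrative pod:

apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]                # overrides the image ENTRYPOINT
    args: ["hello", "override"]      # overrides the image CMD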
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
Mar 30 06:48:11.298: INFO: stdout: "service/rm3 exposed\n"
Mar 30 06:48:11.328: INFO: Service rm3 in namespace kubectl-4670 found.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 06:48:13.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4670" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":283,"completed":113,"skipped":2002,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 30 06:48:35.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1361" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":283,"completed":114,"skipped":2014,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 26 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 06:48:44.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4962" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":283,"completed":115,"skipped":2017,"failed":0}
SSSSS
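A functioning NodePort service means the allocated port answers on every node's IP, which is what the test probes. The spec side is small; the selector and ports here are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: nodeport-demo
spec:
  type: NodePort
  selector:
    app: nodeport-demo
  ports:
  - port: 80
    targetPort: 8080
    # nodePort is allocated from the cluster's range (default 30000-32767) unless pinned here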
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-c9250009-5000-4a70-9f59-a7a490492411
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 06:49:57.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6974" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":116,"skipped":2022,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:49:58.251: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7f2a201-4f5c-4707-a14c-7294a3302759" in namespace "projected-1185" to be "Succeeded or Failed"
Mar 30 06:49:58.282: INFO: Pod "downwardapi-volume-f7f2a201-4f5c-4707-a14c-7294a3302759": Phase="Pending", Reason="", readiness=false. Elapsed: 30.814189ms
Mar 30 06:50:00.311: INFO: Pod "downwardapi-volume-f7f2a201-4f5c-4707-a14c-7294a3302759": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060155289s
STEP: Saw pod success
Mar 30 06:50:00.311: INFO: Pod "downwardapi-volume-f7f2a201-4f5c-4707-a14c-7294a3302759" satisfied condition "Succeeded or Failed"
Mar 30 06:50:00.340: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-f7f2a201-4f5c-4707-a14c-7294a3302759 container client-container: <nil>
STEP: delete the pod
Mar 30 06:50:00.422: INFO: Waiting for pod downwardapi-volume-f7f2a201-4f5c-4707-a14c-7294a3302759 to disappear
Mar 30 06:50:00.452: INFO: Pod downwardapi-volume-f7f2a201-4f5c-4707-a14c-7294a3302759 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 06:50:00.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1185" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":117,"skipped":2028,"failed":0}

------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:175
Mar 30 06:50:00.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-272" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":283,"completed":118,"skipped":2028,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-84a16981-5541-4fbe-ad41-8cfc050c3e9c
STEP: Creating a pod to test consume secrets
Mar 30 06:50:00.986: INFO: Waiting up to 5m0s for pod "pod-secrets-ef6c1769-f27f-4e7c-afce-5b8cb1bd6a68" in namespace "secrets-1259" to be "Succeeded or Failed"
Mar 30 06:50:01.015: INFO: Pod "pod-secrets-ef6c1769-f27f-4e7c-afce-5b8cb1bd6a68": Phase="Pending", Reason="", readiness=false. Elapsed: 29.085219ms
Mar 30 06:50:03.045: INFO: Pod "pod-secrets-ef6c1769-f27f-4e7c-afce-5b8cb1bd6a68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058427423s
STEP: Saw pod success
Mar 30 06:50:03.045: INFO: Pod "pod-secrets-ef6c1769-f27f-4e7c-afce-5b8cb1bd6a68" satisfied condition "Succeeded or Failed"
Mar 30 06:50:03.073: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-secrets-ef6c1769-f27f-4e7c-afce-5b8cb1bd6a68 container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 06:50:03.162: INFO: Waiting for pod pod-secrets-ef6c1769-f27f-4e7c-afce-5b8cb1bd6a68 to disappear
Mar 30 06:50:03.195: INFO: Pod pod-secrets-ef6c1769-f27f-4e7c-afce-5b8cb1bd6a68 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 06:50:03.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1259" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":119,"skipped":2064,"failed":0}
SSSSSSSSSSSSSSSSS
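Here the knobs under test are defaultMode on the secret volume plus runAsUser/fsGroup on the pod; the container lists the mounted files to verify mode and group ownership. Sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root
    fsGroup: 2000                    # volume files are owned by this group
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-vol
    secret:
      secretName: secret-test-demo   # illustrative
      defaultMode: 0400              # octal mode applied to the projected keys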
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Mar 30 06:50:05.572: INFO: Pod pod-hostip-55e14cc3-38d2-4261-bd01-1462e3f12774 has hostIP: 10.150.0.5
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 06:50:05.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7829" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":283,"completed":120,"skipped":2081,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0330 06:50:46.048561   25071 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 30 06:50:46.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1370" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":283,"completed":121,"skipped":2108,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 35 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 30 06:50:56.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0330 06:50:56.770769   25071 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-383" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":283,"completed":122,"skipped":2131,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
Mar 30 06:52:12.817: INFO: Terminating ReplicationController wrapped-volume-race-7072e181-9b9c-4608-bf2c-b10bdfe94ddf pods took: 300.246396ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 30 06:52:28.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5859" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":283,"completed":123,"skipped":2139,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 30 06:52:30.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5407" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":124,"skipped":2154,"failed":0}
SSSSSSSSSSSSS
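Read-only root filesystem is a one-line securityContext setting; the test verifies that writes to the container's root filesystem fail. Illustrative pod:

apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /should-fail || echo rootfs is read-only"]
    securityContext:
      readOnlyRootFilesystem: true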
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:52:30.731: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee58f48c-9b98-4e9c-bb92-7c2e7045ba49" in namespace "projected-2078" to be "Succeeded or Failed"
Mar 30 06:52:30.760: INFO: Pod "downwardapi-volume-ee58f48c-9b98-4e9c-bb92-7c2e7045ba49": Phase="Pending", Reason="", readiness=false. Elapsed: 28.746431ms
Mar 30 06:52:32.789: INFO: Pod "downwardapi-volume-ee58f48c-9b98-4e9c-bb92-7c2e7045ba49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058083405s
STEP: Saw pod success
Mar 30 06:52:32.789: INFO: Pod "downwardapi-volume-ee58f48c-9b98-4e9c-bb92-7c2e7045ba49" satisfied condition "Succeeded or Failed"
Mar 30 06:52:32.818: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-ee58f48c-9b98-4e9c-bb92-7c2e7045ba49 container client-container: <nil>
STEP: delete the pod
Mar 30 06:52:32.907: INFO: Waiting for pod downwardapi-volume-ee58f48c-9b98-4e9c-bb92-7c2e7045ba49 to disappear
Mar 30 06:52:32.937: INFO: Pod downwardapi-volume-ee58f48c-9b98-4e9c-bb92-7c2e7045ba49 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 06:52:32.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2078" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":283,"completed":125,"skipped":2167,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0330 06:52:39.395949   25071 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 30 06:52:39.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-246" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":283,"completed":126,"skipped":2167,"failed":0}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
Mar 30 06:52:39.463: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 30 06:52:41.747: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 30 06:52:41.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2718" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":127,"skipped":2170,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 51 lines ...
Mar 30 06:52:50.478: INFO: stderr: ""
Mar 30 06:52:50.478: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 06:52:50.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1767" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":283,"completed":128,"skipped":2174,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 06:53:08.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3567" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":283,"completed":129,"skipped":2185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 100 lines ...
Mar 30 06:54:56.183: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9669/pods","resourceVersion":"14650"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 30 06:54:56.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9669" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":283,"completed":130,"skipped":2208,"failed":0}
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 34 lines ...
Mar 30 06:57:08.368: INFO: Deleting pod "var-expansion-b04eb048-23e1-4b84-bba3-75812c997113" in namespace "var-expansion-4841"
Mar 30 06:57:08.402: INFO: Wait up to 5m0s for pod "var-expansion-b04eb048-23e1-4b84-bba3-75812c997113" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 06:57:46.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4841" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":283,"completed":131,"skipped":2213,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 06:57:57.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6137" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":283,"completed":132,"skipped":2227,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 27 lines ...
Mar 30 06:58:16.465: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 30 06:58:16.494: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 30 06:58:16.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7689" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":283,"completed":133,"skipped":2240,"failed":0}

------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 30 06:58:18.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9313" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":134,"skipped":2240,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-7167/configmap-test-fba7a619-a01f-41a7-8fe4-7eb54b20f999
STEP: Creating a pod to test consume configMaps
Mar 30 06:58:19.168: INFO: Waiting up to 5m0s for pod "pod-configmaps-509b9a6e-39fa-4021-849b-060757faaa83" in namespace "configmap-7167" to be "Succeeded or Failed"
Mar 30 06:58:19.198: INFO: Pod "pod-configmaps-509b9a6e-39fa-4021-849b-060757faaa83": Phase="Pending", Reason="", readiness=false. Elapsed: 29.465065ms
Mar 30 06:58:21.227: INFO: Pod "pod-configmaps-509b9a6e-39fa-4021-849b-060757faaa83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058806538s
STEP: Saw pod success
Mar 30 06:58:21.227: INFO: Pod "pod-configmaps-509b9a6e-39fa-4021-849b-060757faaa83" satisfied condition "Succeeded or Failed"
Mar 30 06:58:21.256: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-configmaps-509b9a6e-39fa-4021-849b-060757faaa83 container env-test: <nil>
STEP: delete the pod
Mar 30 06:58:21.331: INFO: Waiting for pod pod-configmaps-509b9a6e-39fa-4021-849b-060757faaa83 to disappear
Mar 30 06:58:21.361: INFO: Pod pod-configmaps-509b9a6e-39fa-4021-849b-060757faaa83 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 06:58:21.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7167" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":283,"completed":135,"skipped":2303,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Mar 30 06:58:31.972: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3675 /api/v1/namespaces/watch-3675/configmaps/e2e-watch-test-label-changed c3c718d5-68a1-4160-bd3b-0fff080d22a8 15289 0 2020-03-30 06:58:21 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 30 06:58:31.973: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3675 /api/v1/namespaces/watch-3675/configmaps/e2e-watch-test-label-changed c3c718d5-68a1-4160-bd3b-0fff080d22a8 15290 0 2020-03-30 06:58:21 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 30 06:58:31.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3675" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":283,"completed":136,"skipped":2335,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 06:58:32.226: INFO: Waiting up to 5m0s for pod "downwardapi-volume-202643cf-b5d1-4cee-8e45-3f42f074000d" in namespace "downward-api-7687" to be "Succeeded or Failed"
Mar 30 06:58:32.256: INFO: Pod "downwardapi-volume-202643cf-b5d1-4cee-8e45-3f42f074000d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.200902ms
Mar 30 06:58:34.285: INFO: Pod "downwardapi-volume-202643cf-b5d1-4cee-8e45-3f42f074000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059786717s
STEP: Saw pod success
Mar 30 06:58:34.285: INFO: Pod "downwardapi-volume-202643cf-b5d1-4cee-8e45-3f42f074000d" satisfied condition "Succeeded or Failed"
Mar 30 06:58:34.314: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-202643cf-b5d1-4cee-8e45-3f42f074000d container client-container: <nil>
STEP: delete the pod
Mar 30 06:58:34.389: INFO: Waiting for pod downwardapi-volume-202643cf-b5d1-4cee-8e45-3f42f074000d to disappear
Mar 30 06:58:34.419: INFO: Pod downwardapi-volume-202643cf-b5d1-4cee-8e45-3f42f074000d no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 06:58:34.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7687" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":137,"skipped":2341,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-12350132-1bbf-4c37-9ff4-087761beb96a
STEP: Creating a pod to test consume configMaps
Mar 30 06:58:34.695: INFO: Waiting up to 5m0s for pod "pod-configmaps-2bd35b96-a5e1-4482-9dfc-4c96e5e9eb33" in namespace "configmap-394" to be "Succeeded or Failed"
Mar 30 06:58:34.725: INFO: Pod "pod-configmaps-2bd35b96-a5e1-4482-9dfc-4c96e5e9eb33": Phase="Pending", Reason="", readiness=false. Elapsed: 29.63823ms
Mar 30 06:58:36.754: INFO: Pod "pod-configmaps-2bd35b96-a5e1-4482-9dfc-4c96e5e9eb33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058549169s
STEP: Saw pod success
Mar 30 06:58:36.754: INFO: Pod "pod-configmaps-2bd35b96-a5e1-4482-9dfc-4c96e5e9eb33" satisfied condition "Succeeded or Failed"
Mar 30 06:58:36.783: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-configmaps-2bd35b96-a5e1-4482-9dfc-4c96e5e9eb33 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 06:58:36.856: INFO: Waiting for pod pod-configmaps-2bd35b96-a5e1-4482-9dfc-4c96e5e9eb33 to disappear
Mar 30 06:58:36.885: INFO: Pod pod-configmaps-2bd35b96-a5e1-4482-9dfc-4c96e5e9eb33 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 06:58:36.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-394" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":283,"completed":138,"skipped":2353,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-ac7d986f-5746-45d6-9a05-06f7c8c5f272
STEP: Creating a pod to test consume secrets
Mar 30 06:58:37.171: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-984ecf00-f873-455b-b7e2-467967048f16" in namespace "projected-7944" to be "Succeeded or Failed"
Mar 30 06:58:37.201: INFO: Pod "pod-projected-secrets-984ecf00-f873-455b-b7e2-467967048f16": Phase="Pending", Reason="", readiness=false. Elapsed: 29.597056ms
Mar 30 06:58:39.231: INFO: Pod "pod-projected-secrets-984ecf00-f873-455b-b7e2-467967048f16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059948958s
STEP: Saw pod success
Mar 30 06:58:39.231: INFO: Pod "pod-projected-secrets-984ecf00-f873-455b-b7e2-467967048f16" satisfied condition "Succeeded or Failed"
Mar 30 06:58:39.260: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-projected-secrets-984ecf00-f873-455b-b7e2-467967048f16 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 30 06:58:39.336: INFO: Waiting for pod pod-projected-secrets-984ecf00-f873-455b-b7e2-467967048f16 to disappear
Mar 30 06:58:39.366: INFO: Pod pod-projected-secrets-984ecf00-f873-455b-b7e2-467967048f16 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 06:58:39.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7944" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":139,"skipped":2356,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 30 06:58:44.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5759" for this suite.
STEP: Destroying namespace "webhook-5759-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":283,"completed":140,"skipped":2367,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 06:58:44.482: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 30 06:58:50.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9147" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":283,"completed":141,"skipped":2375,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 20 lines ...
Mar 30 06:58:55.315: INFO: Pod "test-cleanup-deployment-577c77b589-2hjlt" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-2hjlt test-cleanup-deployment-577c77b589- deployment-8990 /api/v1/namespaces/deployment-8990/pods/test-cleanup-deployment-577c77b589-2hjlt 341669da-fdc3-48ea-8771-f03fdd25034a 15647 0 2020-03-30 06:58:53 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:577c77b589] map[cni.projectcalico.org/podIP:192.168.130.236/32] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 a3c2c05d-3eaf-4761-8b8d-78dbcf99c85e 0xc0029b7b87 0xc0029b7b88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rncpw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rncpw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rncpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 06:58:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 06:58:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 06:58:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 06:58:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.5,PodIP:192.168.130.236,StartTime:2020-03-30 06:58:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 06:58:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://c0eaf761fe39b1707c71b891d39cd39bc1db9accd5c982b9f2ce39c14abeffb5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.130.236,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 30 06:58:55.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8990" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":283,"completed":142,"skipped":2383,"failed":0}
SS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 15 lines ...
Mar 30 06:59:04.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 30 06:59:06.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 30 06:59:08.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148336, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 30 07:00:10.798: INFO: Waited 1m0.252518492s for the sample-apiserver to be ready to handle requests.
Mar 30 07:00:10.798: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","selfLink":"/apis/apiregistration.k8s.io/v1/apiservices/v1alpha1.wardle.example.com","uid":"1381d401-6808-4500-8adb-5fac17d5b0e8","resourceVersion":"15817","creationTimestamp":"2020-03-30T06:59:10Z"},"spec":{"service":{"namespace":"aggregator-2546","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyRENDQWNDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpBd016TXdNRFkxT0RVMVdoY05NekF3TXpJNE1EWTFPRFUxV2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUUN6ckZxSUk0YUwxNmh3QVpOUjQ4dDhWaEIwc211Y0ZEVkUrbnFQNzhLc3dPeFMKeGY2ZGZTbW10a1ZxM2Y5ZmJUZU1EcEovc1F1T0QyZW05MnFQWjlGMndmenNKT0FoREQxZjhYY085eWtZR0hrdgppek8rQjJtNUdRQzBXRDBYWWtsSnR5ZVRHWE9aTTE3NEhKM3FtbSsvcDNCdjVxNnhtR1ZMTkxVUjA4bFllTXp1CnZWeDBjeW9EbVNacWErM2pXNkxaMEVBT3FYM3E1NHJma1hzbjV0VnptT0xwcnN6T1ZTWWgzYnRxRTFxVkRvb24KYnkwc0FmbWRUTzRZVnYwL0xkTk9VbktubE1IbWRNN090M1FtTElZY21KaDE2YWRmYkUxa0d1OXNhckw2NXFQawpSbjVLUUs3RWlCWm8ybjhUQVRpeTV3T3paZDlYT0RieWVlY2RnSjROQWdNQkFBR2pJekFoTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDbVdsR0oKTWVIZlpQWERIR1hwU0pVT0hCc285cTNWQTNHYVJBQkRiaFV1VTlqekx1Y3VHemtRTVVUcWt6L2xvRjdUNjJ3VwpEVitza085MzNORTlKSTJNaXp3LzhNYkprdlVMNGs3Y3pPdzFHbVlPZnV1WlpmdDBUMi9vLzVyV2ptRXp6Zk0yCno1ckJSU2F0bXRibGJ4ejhGRGZ1eG5XTFVIZ25uaHF0cHdnYVVaZ21VZ0VVQnd3eXAzV3JyYzEyZk1Na09Rd2QKdXV0MWtCT09ZNGZramFkRVFIdlQvdVY0TUIyQjUvaFZLQVkwbVdTMmMvTE1lTzFVZmkvR3ZMSkhVaXJiWmtGVwpObjFMcUFrY0VSZHJOVUFEeVptd2RiUDhMeDNBak5rZWZHUzVZQTRoRHpqQU91YjlkU0xETkhiK1lHNDJoZ0U5CnhEbUVRY2RvblowckZYSncKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2020-03-30T06:59:10Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.105.163.115:7443/apis/wardle.example.com/v1alpha1: bad status from https://10.105.163.115:7443/apis/wardle.example.com/v1alpha1: 403"}]}}
Mar 30 07:00:10.798: INFO: current pods: {"metadata":{"selfLink":"/api/v1/namespaces/aggregator-2546/pods","resourceVersion":"15930"},"items":[{"metadata":{"name":"sample-apiserver-deployment-54b47bf96b-rhlvh","generateName":"sample-apiserver-deployment-54b47bf96b-","namespace":"aggregator-2546","selfLink":"/api/v1/namespaces/aggregator-2546/pods/sample-apiserver-deployment-54b47bf96b-rhlvh","uid":"c5b00dd1-f746-4a32-ac5b-4faeb3115d89","resourceVersion":"15808","creationTimestamp":"2020-03-30T06:58:56Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"54b47bf96b"},"annotations":{"cni.projectcalico.org/podIP":"192.168.130.237/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-54b47bf96b","uid":"d30af610-cc78-43e2-b231-bf4b839d3eb2","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"default-token-4pv6v","secret":{"secretName":"default-token-4pv6v","defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"default-token-4pv6v","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.4","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"default-token-4pv6v","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-30T06:58:56Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-30T06:59:10Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-30T06:59:10Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-30T06:58:56Z"}],"hostIP":"10.150.0.5","podIP":"192.168.130.237","podIPs":[{"ip":"192.168.130.237"}],"startTime":"2020-03-30T06:58:56Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2020-03-30T06:59:09Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.4","imageID":"k8s.gcr.io/etcd@sha256:e10ee22e7b56d08b7cb7da2a390863c445d66a7284294cee8c9decbfb3ba4359","containerID":"containerd://1a92aac3ab4410a9653645da75893619734390b4779b27ce5589ad12383267fb","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2020-03-30T06:58:59Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17","imageID":"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55","containerID":"containerd://7b53c811492f86c31d3b92755e6d863cfc9e91b216541fdc2d78055b8056e73e","started":true}],"qosClass":"BestEffort"}}]}
Mar 30 07:00:10.889: INFO: logs of sample-apiserver-deployment-54b47bf96b-rhlvh/sample-apiserver (error: <nil>): W0330 06:59:00.340458       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0330 06:59:00.340945       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
I0330 06:59:00.364894       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
I0330 06:59:00.364931       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
I0330 06:59:00.367636       1 client.go:361] parsed scheme: "endpoint"
I0330 06:59:00.367766       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0330 06:59:00.368148       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0330 06:59:00.623087       1 client.go:361] parsed scheme: "endpoint"
I0330 06:59:00.623188       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0330 06:59:00.625771       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0330 06:59:01.368650       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0330 06:59:01.626206       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0330 06:59:02.784864       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0330 06:59:03.426697       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0330 06:59:05.604473       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0330 06:59:06.244884       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0330 06:59:10.538222       1 client.go:361] parsed scheme: "endpoint"
I0330 06:59:10.538264       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0330 06:59:10.541799       1 client.go:361] parsed scheme: "endpoint"
I0330 06:59:10.541837       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0330 06:59:10.543846       1 client.go:361] parsed scheme: "endpoint"
I0330 06:59:10.543943       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0330 06:59:10.592011       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0330 06:59:10.592271       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0330 06:59:10.592441       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0330 06:59:10.592538       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0330 06:59:10.592776       1 secure_serving.go:178] Serving securely on [::]:443
I0330 06:59:10.593149       1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key
I0330 06:59:10.593199       1 tlsconfig.go:219] Starting DynamicServingCertificateController
E0330 06:59:10.599355       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:10.599703       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
... skipping 64 lines ...
E0330 06:59:43.671363       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:43.678674       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:44.673351       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:44.680406       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:45.675229       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:45.682367       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:46.677666       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:46.684172       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:47.679745       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:47.686028       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:48.681776       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:48.688066       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:49.683861       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:49.689689       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:50.686212       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:50.691440       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:51.688142       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:51.693288       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:52.690318       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:52.695004       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:53.692343       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:53.696639       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:54.695622       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:54.698217       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:55.697641       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:55.699898       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:56.699723       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:56.701597       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:57.702700       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:57.703238       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:58.704983       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:58.705703       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:59.706977       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 06:59:59.708935       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:00.709639       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:00.711619       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:01.711839       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:01.714543       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:02.713977       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:02.716143       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:03.716175       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:03.718030       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:04.718175       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:04.721446       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:05.720111       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:05.723019       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:06.722080       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:06.724748       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:07.724160       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:07.726416       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:08.726153       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:08.728257       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:09.728145       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:09.730653       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:10.730364       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0330 07:00:10.733016       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-2546:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
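
The repeated reflector errors above mean the sample API server's pod, running as the default service account in namespace aggregator-2546, is denied read access to the extension-apiserver-authentication ConfigMap in kube-system. A minimal client-go sketch that reproduces the denied call is below; it assumes a recent client-go (v0.18+, where Get takes a context — the v0.17.0 call in the log omits it), reuses the kubeconfig path from this run, and the fix it suggests (binding to the standard kube-system extension-apiserver-authentication-reader role) is the conventional remedy, not something this log performs.

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the test run above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same read the sample API server's reflector keeps retrying:
	// the "extension-apiserver-authentication" ConfigMap in kube-system.
	_, err = cs.CoreV1().ConfigMaps("kube-system").Get(
		context.TODO(), "extension-apiserver-authentication", metav1.GetOptions{})
	if apierrors.IsForbidden(err) {
		// Matches the E0330 lines: the aggregator-2546/default service
		// account has no RBAC grant for configmaps in kube-system.
		fmt.Println("forbidden, as in the log; bind the service account to the",
			"kube-system 'extension-apiserver-authentication-reader' role")
	}
}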

Mar 30 07:00:10.923: INFO: logs of sample-apiserver-deployment-54b47bf96b-rhlvh/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-30 06:59:10.008658 I | etcdmain: etcd Version: 3.4.4
2020-03-30 06:59:10.008879 I | etcdmain: Git SHA: c65a9e2dd
2020-03-30 06:59:10.008885 I | etcdmain: Go Version: go1.12.12
2020-03-30 06:59:10.008891 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-30 06:59:10.008896 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-03-30 06:59:10.008907 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
2020-03-30 06:59:10.535381 I | embed: ready to serve client requests
2020-03-30 06:59:10.535487 I | etcdserver: setting up the initial cluster version to 3.4
2020-03-30 06:59:10.535810 N | etcdserver/membership: set the initial cluster version to 3.4
2020-03-30 06:59:10.536003 I | etcdserver/api: enabled capabilities for version 3.4
2020-03-30 06:59:10.536975 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!

Mar 30 07:00:10.923: FAIL: gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc0001ba000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 153 lines ...
[sig-api-machinery] Aggregator
test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
  test/e2e/framework/framework.go:597

  Mar 30 07:00:10.923: gave up waiting for apiservice wardle to come up successfully
  Unexpected error:
      <*errors.errorString | 0xc0001ba000>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  test/e2e/apimachinery/aggregator.go:401
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":283,"completed":142,"skipped":2385,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-45d7254c-4c09-458a-a88b-0b1937d3d279
STEP: Creating a pod to test consume configMaps
Mar 30 07:00:13.375: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fa239556-3966-417c-b314-52460646be49" in namespace "projected-1136" to be "Succeeded or Failed"
Mar 30 07:00:13.405: INFO: Pod "pod-projected-configmaps-fa239556-3966-417c-b314-52460646be49": Phase="Pending", Reason="", readiness=false. Elapsed: 29.588879ms
Mar 30 07:00:15.434: INFO: Pod "pod-projected-configmaps-fa239556-3966-417c-b314-52460646be49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058776501s
STEP: Saw pod success
Mar 30 07:00:15.434: INFO: Pod "pod-projected-configmaps-fa239556-3966-417c-b314-52460646be49" satisfied condition "Succeeded or Failed"
Mar 30 07:00:15.464: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-projected-configmaps-fa239556-3966-417c-b314-52460646be49 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 07:00:15.537: INFO: Waiting for pod pod-projected-configmaps-fa239556-3966-417c-b314-52460646be49 to disappear
Mar 30 07:00:15.566: INFO: Pod pod-projected-configmaps-fa239556-3966-417c-b314-52460646be49 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 07:00:15.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1136" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":143,"skipped":2391,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
Mar 30 07:00:23.276: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:175
Mar 30 07:00:23.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-3582" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":283,"completed":144,"skipped":2422,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 07:00:23.400: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 30 07:00:23.564: INFO: Waiting up to 5m0s for pod "pod-78c2a753-8a88-4fc9-a436-3fdba0c5889f" in namespace "emptydir-2259" to be "Succeeded or Failed"
Mar 30 07:00:23.593: INFO: Pod "pod-78c2a753-8a88-4fc9-a436-3fdba0c5889f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.055144ms
Mar 30 07:00:25.622: INFO: Pod "pod-78c2a753-8a88-4fc9-a436-3fdba0c5889f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.05853887s
STEP: Saw pod success
Mar 30 07:00:25.622: INFO: Pod "pod-78c2a753-8a88-4fc9-a436-3fdba0c5889f" satisfied condition "Succeeded or Failed"
Mar 30 07:00:25.652: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-78c2a753-8a88-4fc9-a436-3fdba0c5889f container test-container: <nil>
STEP: delete the pod
Mar 30 07:00:25.726: INFO: Waiting for pod pod-78c2a753-8a88-4fc9-a436-3fdba0c5889f to disappear
Mar 30 07:00:25.755: INFO: Pod pod-78c2a753-8a88-4fc9-a436-3fdba0c5889f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 07:00:25.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2259" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":145,"skipped":2443,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Mar 30 07:00:36.084: INFO: stderr: ""
Mar 30 07:00:36.084: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 07:00:36.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6241" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":283,"completed":146,"skipped":2444,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-1988142f-1080-4365-9c78-d08c4a6fe760
STEP: Creating a pod to test consume secrets
Mar 30 07:00:36.363: INFO: Waiting up to 5m0s for pod "pod-secrets-4cb9332e-4a0d-42dc-a687-15f26cf2692a" in namespace "secrets-4878" to be "Succeeded or Failed"
Mar 30 07:00:36.393: INFO: Pod "pod-secrets-4cb9332e-4a0d-42dc-a687-15f26cf2692a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.110044ms
Mar 30 07:00:38.423: INFO: Pod "pod-secrets-4cb9332e-4a0d-42dc-a687-15f26cf2692a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059851633s
STEP: Saw pod success
Mar 30 07:00:38.423: INFO: Pod "pod-secrets-4cb9332e-4a0d-42dc-a687-15f26cf2692a" satisfied condition "Succeeded or Failed"
Mar 30 07:00:38.451: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-secrets-4cb9332e-4a0d-42dc-a687-15f26cf2692a container secret-env-test: <nil>
STEP: delete the pod
Mar 30 07:00:38.528: INFO: Waiting for pod pod-secrets-4cb9332e-4a0d-42dc-a687-15f26cf2692a to disappear
Mar 30 07:00:38.556: INFO: Pod pod-secrets-4cb9332e-4a0d-42dc-a687-15f26cf2692a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 30 07:00:38.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4878" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":283,"completed":147,"skipped":2446,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 07:00:38.647: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Mar 30 07:00:38.775: INFO: PodSpec: initContainers in spec.initContainers
Mar 30 07:01:20.404: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-19e48da8-a8f5-48c8-810d-55e2fa5147ec", GenerateName:"", Namespace:"init-container-5128", SelfLink:"/api/v1/namespaces/init-container-5128/pods/pod-init-19e48da8-a8f5-48c8-810d-55e2fa5147ec", UID:"f7211fb0-8ff8-4c4b-ba7b-35f2fc830089", ResourceVersion:"16304", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721148438, loc:(*time.Location)(0x7b56f20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"775927029"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"192.168.130.241/32"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-z46r5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002b11500), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z46r5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z46r5", 
ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z46r5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0032fde18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00059a460), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0032fde90)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0032fdeb0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0032fdeb8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0032fdebc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148438, loc:(*time.Location)(0x7b56f20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148438, loc:(*time.Location)(0x7b56f20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148438, loc:(*time.Location)(0x7b56f20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721148438, loc:(*time.Location)(0x7b56f20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.150.0.5", PodIP:"192.168.130.241", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.130.241"}}, StartTime:(*v1.Time)(0xc001976140), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00059a540)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00059a620)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://045fca573bb331c1b8b401586826cc61c2c0a5b73d72ad79fc9e0ff9565ff8d4", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001976180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001976160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0032fdf3f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 30 07:01:20.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5128" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":283,"completed":148,"skipped":2463,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-46417743-9607-4692-8979-9a64f2159c06
STEP: Creating a pod to test consume configMaps
Mar 30 07:01:20.686: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-532bde3b-c02f-45cf-be59-b7466079fa12" in namespace "projected-6546" to be "Succeeded or Failed"
Mar 30 07:01:20.721: INFO: Pod "pod-projected-configmaps-532bde3b-c02f-45cf-be59-b7466079fa12": Phase="Pending", Reason="", readiness=false. Elapsed: 35.10284ms
Mar 30 07:01:22.750: INFO: Pod "pod-projected-configmaps-532bde3b-c02f-45cf-be59-b7466079fa12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064366119s
STEP: Saw pod success
Mar 30 07:01:22.750: INFO: Pod "pod-projected-configmaps-532bde3b-c02f-45cf-be59-b7466079fa12" satisfied condition "Succeeded or Failed"
Mar 30 07:01:22.779: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-projected-configmaps-532bde3b-c02f-45cf-be59-b7466079fa12 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 07:01:22.855: INFO: Waiting for pod pod-projected-configmaps-532bde3b-c02f-45cf-be59-b7466079fa12 to disappear
Mar 30 07:01:22.885: INFO: Pod pod-projected-configmaps-532bde3b-c02f-45cf-be59-b7466079fa12 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 07:01:22.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6546" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":149,"skipped":2475,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Mar 30 07:01:25.990: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 30 07:01:25.990: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe pod agnhost-master-6xbcl --namespace=kubectl-7509'
Mar 30 07:01:26.265: INFO: stderr: ""
Mar 30 07:01:26.265: INFO: stdout: "Name:         agnhost-master-6xbcl\nNamespace:    kubectl-7509\nPriority:     0\nNode:         test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal/10.150.0.3\nStart Time:   Mon, 30 Mar 2020 07:01:23 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  cni.projectcalico.org/podIP: 192.168.14.124/32\nStatus:       Running\nIP:           192.168.14.124\nIPs:\n  IP:           192.168.14.124\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://f376b7a96b29c89f32763b10aa4aa77388c9810593a417f222c862e4885a0510\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 30 Mar 2020 07:01:24 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s5c78 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-s5c78:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-s5c78\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                                                             Message\n  ----    ------     ----       ----                                                             -------\n  Normal  Scheduled  <unknown>  default-scheduler                                                Successfully assigned kubectl-7509/agnhost-master-6xbcl to test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal\n  Normal  Pulled     2s         kubelet, test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s         kubelet, test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal  Created container agnhost-master\n  Normal  Started    2s         kubelet, test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal  Started container agnhost-master\n"
Mar 30 07:01:26.265: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe rc agnhost-master --namespace=kubectl-7509'
Mar 30 07:01:26.596: INFO: stderr: ""
Mar 30 07:01:26.596: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-7509\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-master-6xbcl\n"
Mar 30 07:01:26.597: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe service agnhost-master --namespace=kubectl-7509'
Mar 30 07:01:26.907: INFO: stderr: ""
Mar 30 07:01:26.907: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-7509\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.101.246.122\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.14.124:6379\nSession Affinity:  None\nEvents:            <none>\n"
Mar 30 07:01:26.963: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe node test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal'
Mar 30 07:01:27.340: INFO: stderr: ""
Mar 30 07:01:27.341: INFO: stdout: "Name:               test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-2\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=us-east4\n                    failure-domain.beta.kubernetes.io/zone=us-east4-a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    projectcalico.org/IPv4Address: 10.150.0.2/32\n                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.154.128\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 30 Mar 2020 06:05:39 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 30 Mar 2020 07:01:20 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Mon, 30 Mar 2020 06:05:53 +0000   Mon, 30 Mar 2020 06:05:53 +0000   CalicoIsUp                   Calico is running on this node\n  MemoryPressure       False   Mon, 30 Mar 2020 07:00:30 +0000   Mon, 30 Mar 2020 06:05:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Mon, 30 Mar 2020 07:00:30 +0000   Mon, 30 Mar 2020 06:05:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Mon, 30 Mar 2020 07:00:30 +0000   Mon, 30 Mar 2020 06:05:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Mon, 30 Mar 2020 07:00:30 +0000   Mon, 30 Mar 2020 06:05:59 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled
Addresses:
  InternalIP:   10.150.0.2
  ExternalIP:   
  InternalDNS:  test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal
  Hostname:     test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal
Capacity:
  attachable-volumes-gce-pd:  127
  cpu:                        2
  ephemeral-storage:          30308240Ki
  hugepages-1Gi:              0
  hugepages-2Mi:              0
  memory:                     7649104Ki
  pods:                       110
Allocatable:
  attachable-volumes-gce-pd:  127
  cpu:                        2
  ephemeral-storage:          27932073938
  hugepages-1Gi:              0
  hugepages-2Mi:              0
  memory:                     7546704Ki
  pods:                       110
System Info:
  Machine ID:                 64cbb088b496ef2e737590cbf9cef703
  System UUID:                64cbb088-b496-ef2e-7375-90cbf9cef703
  Boot ID:                    6b79aedd-6c78-4fdd-8c0a-6ced3f83bcbc
  Kernel Version:             5.0.0-1033-gcp
  OS Image:                   Ubuntu 18.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.3.3
  Kubelet Version:            v1.16.2
  Kube-Proxy Version:         v1.16.2
ProviderID:                   gce://k8s-jkns-gci-gce-multizone/us-east4-a/test1-controlplane-0
Non-terminated Pods:          (9 in total)
  Namespace                   Name                                                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                                                                  ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-kube-controllers-564b6667d7-p7bp7                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55m
  kube-system                 calico-node-422fk                                                                     250m (12%)    0 (0%)      0 (0%)           0 (0%)         55m
  kube-system                 coredns-5644d7b6d9-kvbt6                                                              100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55m
  kube-system                 coredns-5644d7b6d9-swrxt                                                              100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55m
  kube-system                 etcd-test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         55m
  kube-system                 kube-apiserver-test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal             250m (12%)    0 (0%)      0 (0%)           0 (0%)         55m
  kube-system                 kube-controller-manager-test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal    200m (10%)    0 (0%)      0 (0%)           0 (0%)         55m
  kube-system                 kube-proxy-jxmzr                                                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         55m
  kube-system                 kube-scheduler-test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal             100m (5%)     0 (0%)      0 (0%)           0 (0%)         55m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                   Requests    Limits
  --------                   --------    ------
  cpu                        1 (50%)     0 (0%)
  memory                     140Mi (1%)  340Mi (4%)
  ephemeral-storage          0 (0%)      0 (0%)
  hugepages-1Gi              0 (0%)      0 (0%)
  hugepages-2Mi              0 (0%)      0 (0%)
  attachable-volumes-gce-pd  0           0
Events:
  Type     Reason                   Age                From                                                                    Message
  ----     ------                   ----               ----                                                                    -------
  Normal   Starting                 58m                kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal     Starting kubelet.
  Warning  InvalidDiskCapacity      58m                kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal     invalid capacity 0 on image filesystem
  Normal   NodeHasNoDiskPressure    58m (x7 over 58m)  kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal     Node test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     58m (x7 over 58m)  kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal     Node test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  58m                kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  58m (x8 over 58m)  kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal     Node test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal status is now: NodeHasSufficientMemory
  Normal   Starting                 55m                kube-proxy, test1-controlplane-0.c.k8s-jkns-gci-gce-multizone.internal  Starting kube-proxy.
Mar 30 07:01:27.341: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe namespace kubectl-7509'
Mar 30 07:01:27.637: INFO: stderr: ""
Mar 30 07:01:27.637: INFO: stdout:
Name:         kubectl-7509
Labels:       e2e-framework=kubectl
              e2e-run=d19357e1-598c-4839-a6cc-d46a37e8950e
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 07:01:27.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7509" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":283,"completed":150,"skipped":2475,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Mar 30 07:01:34.070: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
Mar 30 07:01:34.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8313" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":283,"completed":151,"skipped":2545,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 30 07:01:36.532: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 30 07:01:36.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7538" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":283,"completed":152,"skipped":2553,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 8 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 30 07:01:39.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2606" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":283,"completed":153,"skipped":2585,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-99502f84-c19b-4fa5-ba37-43be36a9f177
STEP: Creating a pod to test consume secrets
Mar 30 07:01:39.307: INFO: Waiting up to 5m0s for pod "pod-secrets-16ddc814-05b0-42d6-a5b4-d753c595f16f" in namespace "secrets-5782" to be "Succeeded or Failed"
Mar 30 07:01:39.335: INFO: Pod "pod-secrets-16ddc814-05b0-42d6-a5b4-d753c595f16f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.303628ms
Mar 30 07:01:41.365: INFO: Pod "pod-secrets-16ddc814-05b0-42d6-a5b4-d753c595f16f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058176779s
STEP: Saw pod success
Mar 30 07:01:41.365: INFO: Pod "pod-secrets-16ddc814-05b0-42d6-a5b4-d753c595f16f" satisfied condition "Succeeded or Failed"
Mar 30 07:01:41.394: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-secrets-16ddc814-05b0-42d6-a5b4-d753c595f16f container secret-volume-test: <nil>
STEP: delete the pod
Mar 30 07:01:41.467: INFO: Waiting for pod pod-secrets-16ddc814-05b0-42d6-a5b4-d753c595f16f to disappear
Mar 30 07:01:41.496: INFO: Pod pod-secrets-16ddc814-05b0-42d6-a5b4-d753c595f16f no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 07:01:41.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5782" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":154,"skipped":2638,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 9 lines ...
Mar 30 07:01:41.784: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 30 07:01:41.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8231" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":283,"completed":155,"skipped":2642,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Mar 30 07:01:42.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-720 /api/v1/namespaces/watch-720/configmaps/e2e-watch-test-resource-version 012c44f7-a950-4f85-975a-bf03ff35b472 16581 0 2020-03-30 07:01:42 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 30 07:01:42.299: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-720 /api/v1/namespaces/watch-720/configmaps/e2e-watch-test-resource-version 012c44f7-a950-4f85-975a-bf03ff35b472 16583 0 2020-03-30 07:01:42 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 30 07:01:42.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-720" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":283,"completed":156,"skipped":2658,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 30 07:01:42.504: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 07:01:42.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4407" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":283,"completed":157,"skipped":2666,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 30 07:01:43.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0330 07:01:43.752129   25071 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-4834" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":283,"completed":158,"skipped":2671,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-da0ee85d-b226-4a9d-9d61-8cfacfc7b9f7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 07:02:55.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7819" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":159,"skipped":2691,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 07:02:55.658: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Mar 30 07:04:56.740: INFO: Successfully updated pod "var-expansion-05ed744c-de4e-424d-b98d-4b57e93916e6"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Mar 30 07:04:58.799: INFO: Deleting pod "var-expansion-05ed744c-de4e-424d-b98d-4b57e93916e6" in namespace "var-expansion-2772"
Mar 30 07:04:58.833: INFO: Wait up to 5m0s for pod "var-expansion-05ed744c-de4e-424d-b98d-4b57e93916e6" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 07:05:30.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2772" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":283,"completed":160,"skipped":2702,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  test/e2e/framework/framework.go:175
Mar 30 07:05:35.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4870" for this suite.
STEP: Destroying namespace "webhook-4870-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":283,"completed":161,"skipped":2705,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 07:05:49.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2805" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":283,"completed":162,"skipped":2714,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 30 07:05:51.648: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 30 07:05:51.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2445" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":163,"skipped":2736,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Mar 30 07:05:54.334: INFO: Pod "test-recreate-deployment-5f94c574ff-cj785" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-cj785 test-recreate-deployment-5f94c574ff- deployment-2055 /api/v1/namespaces/deployment-2055/pods/test-recreate-deployment-5f94c574ff-cj785 3f396056-cffc-4ea2-b66a-c2be26573dc7 17484 0 2020-03-30 07:05:54 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 0dcf54b3-7615-4f6a-8155-2f9ee40f2355 0xc002b5ef47 0xc002b5ef48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lfcpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lfcpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lfcpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:05:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:05:54 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:05:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:05:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-30 07:05:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 30 07:05:54.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2055" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":283,"completed":164,"skipped":2738,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 100 lines ...
Mar 30 07:07:29.935: INFO: Waiting for statefulset status.replicas updated to 0
Mar 30 07:07:29.964: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 30 07:07:30.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1642" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":283,"completed":165,"skipped":2760,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 07:07:30.148: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 30 07:07:30.310: INFO: Waiting up to 5m0s for pod "pod-b270ae05-5822-4f5f-a12e-cb6e91591bbe" in namespace "emptydir-1751" to be "Succeeded or Failed"
Mar 30 07:07:30.342: INFO: Pod "pod-b270ae05-5822-4f5f-a12e-cb6e91591bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 31.321386ms
Mar 30 07:07:32.371: INFO: Pod "pod-b270ae05-5822-4f5f-a12e-cb6e91591bbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060797641s
STEP: Saw pod success
Mar 30 07:07:32.371: INFO: Pod "pod-b270ae05-5822-4f5f-a12e-cb6e91591bbe" satisfied condition "Succeeded or Failed"
Mar 30 07:07:32.400: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-b270ae05-5822-4f5f-a12e-cb6e91591bbe container test-container: <nil>
STEP: delete the pod
Mar 30 07:07:32.485: INFO: Waiting for pod pod-b270ae05-5822-4f5f-a12e-cb6e91591bbe to disappear
Mar 30 07:07:32.515: INFO: Pod pod-b270ae05-5822-4f5f-a12e-cb6e91591bbe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 07:07:32.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1751" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":166,"skipped":2768,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Mar 30 07:07:33.666: INFO: created pod pod-service-account-nomountsa-nomountspec
Mar 30 07:07:33.666: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 30 07:07:33.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1191" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":283,"completed":167,"skipped":2771,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 07:07:33.920: INFO: Waiting up to 5m0s for pod "busybox-user-65534-5e762744-32f6-4839-837f-d4bb8d44f5e0" in namespace "security-context-test-9612" to be "Succeeded or Failed"
Mar 30 07:07:33.948: INFO: Pod "busybox-user-65534-5e762744-32f6-4839-837f-d4bb8d44f5e0": Phase="Pending", Reason="", readiness=false. Elapsed: 28.581698ms
Mar 30 07:07:35.978: INFO: Pod "busybox-user-65534-5e762744-32f6-4839-837f-d4bb8d44f5e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058141889s
Mar 30 07:07:38.007: INFO: Pod "busybox-user-65534-5e762744-32f6-4839-837f-d4bb8d44f5e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08730822s
Mar 30 07:07:38.007: INFO: Pod "busybox-user-65534-5e762744-32f6-4839-837f-d4bb8d44f5e0" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 30 07:07:38.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9612" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":168,"skipped":2788,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 07:07:54.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9666" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":283,"completed":169,"skipped":2792,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
Mar 30 07:07:55.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5768" for this suite.
STEP: Destroying namespace "nspatchtest-41608f55-e2d0-4a0b-ab38-b7b078933a0d-2095" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":283,"completed":170,"skipped":2805,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-bb57a9a5-f341-4388-8bb1-42ffd8e1e30b
STEP: Creating a pod to test consume configMaps
Mar 30 07:07:55.376: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed5e3d44-8562-4014-9593-69d41941e6da" in namespace "configmap-862" to be "Succeeded or Failed"
Mar 30 07:07:55.407: INFO: Pod "pod-configmaps-ed5e3d44-8562-4014-9593-69d41941e6da": Phase="Pending", Reason="", readiness=false. Elapsed: 30.724931ms
Mar 30 07:07:57.437: INFO: Pod "pod-configmaps-ed5e3d44-8562-4014-9593-69d41941e6da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06078241s
STEP: Saw pod success
Mar 30 07:07:57.437: INFO: Pod "pod-configmaps-ed5e3d44-8562-4014-9593-69d41941e6da" satisfied condition "Succeeded or Failed"
Mar 30 07:07:57.466: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-configmaps-ed5e3d44-8562-4014-9593-69d41941e6da container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 07:07:57.554: INFO: Waiting for pod pod-configmaps-ed5e3d44-8562-4014-9593-69d41941e6da to disappear
Mar 30 07:07:57.585: INFO: Pod pod-configmaps-ed5e3d44-8562-4014-9593-69d41941e6da no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 07:07:57.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-862" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":171,"skipped":2814,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-b2fc84fb-6d64-46f8-9b87-8f4235d081eb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 07:08:02.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2142" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":172,"skipped":2814,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 07:08:23.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5318" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":283,"completed":173,"skipped":2820,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Mar 30 07:10:42.169: INFO: Restart count of pod container-probe-7493/liveness-684fb158-c151-4bd9-89dd-c3c1191a0840 is now 5 (2m16.005307153s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 07:10:42.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7493" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":283,"completed":174,"skipped":2822,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 07:10:42.300: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 30 07:10:42.463: INFO: Waiting up to 5m0s for pod "pod-8228e0dc-2143-4196-a12c-1d4097c94856" in namespace "emptydir-9385" to be "Succeeded or Failed"
Mar 30 07:10:42.493: INFO: Pod "pod-8228e0dc-2143-4196-a12c-1d4097c94856": Phase="Pending", Reason="", readiness=false. Elapsed: 29.227341ms
Mar 30 07:10:44.522: INFO: Pod "pod-8228e0dc-2143-4196-a12c-1d4097c94856": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058593447s
STEP: Saw pod success
Mar 30 07:10:44.522: INFO: Pod "pod-8228e0dc-2143-4196-a12c-1d4097c94856" satisfied condition "Succeeded or Failed"
Mar 30 07:10:44.551: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-8228e0dc-2143-4196-a12c-1d4097c94856 container test-container: <nil>
STEP: delete the pod
Mar 30 07:10:44.633: INFO: Waiting for pod pod-8228e0dc-2143-4196-a12c-1d4097c94856 to disappear
Mar 30 07:10:44.664: INFO: Pod pod-8228e0dc-2143-4196-a12c-1d4097c94856 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 07:10:44.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9385" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":175,"skipped":2841,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-e87e61eb-2105-401f-b9b1-c60b29067a70
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 30 07:12:04.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4424" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":176,"skipped":2841,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Mar 30 07:12:07.144: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 07:12:07.400: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 07:12:07.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5600" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":283,"completed":177,"skipped":2846,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-5gv2
STEP: Creating a pod to test atomic-volume-subpath
Mar 30 07:12:07.712: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5gv2" in namespace "subpath-5668" to be "Succeeded or Failed"
Mar 30 07:12:07.740: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.946964ms
Mar 30 07:12:09.771: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Running", Reason="", readiness=true. Elapsed: 2.059039737s
Mar 30 07:12:11.800: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Running", Reason="", readiness=true. Elapsed: 4.088679247s
Mar 30 07:12:13.830: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Running", Reason="", readiness=true. Elapsed: 6.118187727s
Mar 30 07:12:15.860: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Running", Reason="", readiness=true. Elapsed: 8.148010741s
Mar 30 07:12:17.889: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Running", Reason="", readiness=true. Elapsed: 10.177421052s
Mar 30 07:12:19.918: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Running", Reason="", readiness=true. Elapsed: 12.20682159s
Mar 30 07:12:21.948: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Running", Reason="", readiness=true. Elapsed: 14.236047294s
Mar 30 07:12:23.977: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Running", Reason="", readiness=true. Elapsed: 16.265641379s
Mar 30 07:12:26.007: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Running", Reason="", readiness=true. Elapsed: 18.295413404s
Mar 30 07:12:28.037: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Running", Reason="", readiness=true. Elapsed: 20.325030131s
Mar 30 07:12:30.066: INFO: Pod "pod-subpath-test-configmap-5gv2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.354303028s
STEP: Saw pod success
Mar 30 07:12:30.066: INFO: Pod "pod-subpath-test-configmap-5gv2" satisfied condition "Succeeded or Failed"
Mar 30 07:12:30.095: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-subpath-test-configmap-5gv2 container test-container-subpath-configmap-5gv2: <nil>
STEP: delete the pod
Mar 30 07:12:30.186: INFO: Waiting for pod pod-subpath-test-configmap-5gv2 to disappear
Mar 30 07:12:30.215: INFO: Pod pod-subpath-test-configmap-5gv2 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5gv2
Mar 30 07:12:30.215: INFO: Deleting pod "pod-subpath-test-configmap-5gv2" in namespace "subpath-5668"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 30 07:12:30.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5668" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":283,"completed":178,"skipped":2861,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
Mar 30 07:12:45.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3135" for this suite.
STEP: Destroying namespace "webhook-3135-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":283,"completed":179,"skipped":2899,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 07:12:47.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-68" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":283,"completed":180,"skipped":2902,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 07:12:48.028: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 07:14:48.276: INFO: Deleting pod "var-expansion-908ff571-28e9-4624-b0f0-75e363434e03" in namespace "var-expansion-4589"
Mar 30 07:14:48.312: INFO: Wait up to 5m0s for pod "var-expansion-908ff571-28e9-4624-b0f0-75e363434e03" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 07:14:52.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4589" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":283,"completed":181,"skipped":2903,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 07:14:52.460: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 30 07:14:52.623: INFO: Waiting up to 5m0s for pod "pod-77f3e93b-2cba-42bd-899b-6dcea5c2e9ef" in namespace "emptydir-2256" to be "Succeeded or Failed"
Mar 30 07:14:52.655: INFO: Pod "pod-77f3e93b-2cba-42bd-899b-6dcea5c2e9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 32.096835ms
Mar 30 07:14:54.684: INFO: Pod "pod-77f3e93b-2cba-42bd-899b-6dcea5c2e9ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0609949s
STEP: Saw pod success
Mar 30 07:14:54.684: INFO: Pod "pod-77f3e93b-2cba-42bd-899b-6dcea5c2e9ef" satisfied condition "Succeeded or Failed"
Mar 30 07:14:54.712: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-77f3e93b-2cba-42bd-899b-6dcea5c2e9ef container test-container: <nil>
STEP: delete the pod
Mar 30 07:14:54.795: INFO: Waiting for pod pod-77f3e93b-2cba-42bd-899b-6dcea5c2e9ef to disappear
Mar 30 07:14:54.824: INFO: Pod pod-77f3e93b-2cba-42bd-899b-6dcea5c2e9ef no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 07:14:54.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2256" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":182,"skipped":2904,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 07:14:55.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a32b42cb-f006-4619-9029-8b2dd5220808" in namespace "projected-234" to be "Succeeded or Failed"
Mar 30 07:14:55.106: INFO: Pod "downwardapi-volume-a32b42cb-f006-4619-9029-8b2dd5220808": Phase="Pending", Reason="", readiness=false. Elapsed: 30.161006ms
Mar 30 07:14:57.136: INFO: Pod "downwardapi-volume-a32b42cb-f006-4619-9029-8b2dd5220808": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.05975682s
STEP: Saw pod success
Mar 30 07:14:57.136: INFO: Pod "downwardapi-volume-a32b42cb-f006-4619-9029-8b2dd5220808" satisfied condition "Succeeded or Failed"
Mar 30 07:14:57.165: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-a32b42cb-f006-4619-9029-8b2dd5220808 container client-container: <nil>
STEP: delete the pod
Mar 30 07:14:57.238: INFO: Waiting for pod downwardapi-volume-a32b42cb-f006-4619-9029-8b2dd5220808 to disappear
Mar 30 07:14:57.268: INFO: Pod downwardapi-volume-a32b42cb-f006-4619-9029-8b2dd5220808 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 07:14:57.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-234" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":183,"skipped":2915,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 30 07:15:02.874: INFO: stderr: ""
Mar 30 07:15:02.874: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-454-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 07:15:05.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8270" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":283,"completed":184,"skipped":2932,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 30 07:15:05.746: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Mar 30 07:15:05.934: INFO: Waiting up to 5m0s for pod "client-containers-d569e296-e127-4d41-af24-815b2e12b53d" in namespace "containers-7297" to be "Succeeded or Failed"
Mar 30 07:15:05.966: INFO: Pod "client-containers-d569e296-e127-4d41-af24-815b2e12b53d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.482361ms
Mar 30 07:15:07.995: INFO: Pod "client-containers-d569e296-e127-4d41-af24-815b2e12b53d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061217371s
STEP: Saw pod success
Mar 30 07:15:07.996: INFO: Pod "client-containers-d569e296-e127-4d41-af24-815b2e12b53d" satisfied condition "Succeeded or Failed"
Mar 30 07:15:08.025: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod client-containers-d569e296-e127-4d41-af24-815b2e12b53d container test-container: <nil>
STEP: delete the pod
Mar 30 07:15:08.098: INFO: Waiting for pod client-containers-d569e296-e127-4d41-af24-815b2e12b53d to disappear
Mar 30 07:15:08.128: INFO: Pod client-containers-d569e296-e127-4d41-af24-815b2e12b53d no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 30 07:15:08.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7297" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":283,"completed":185,"skipped":2976,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 30 07:15:08.216: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Mar 30 07:15:08.379: INFO: Waiting up to 5m0s for pod "var-expansion-6c2ec28d-579c-4594-81d7-0e8bf12e2921" in namespace "var-expansion-3009" to be "Succeeded or Failed"
Mar 30 07:15:08.407: INFO: Pod "var-expansion-6c2ec28d-579c-4594-81d7-0e8bf12e2921": Phase="Pending", Reason="", readiness=false. Elapsed: 28.401856ms
Mar 30 07:15:10.437: INFO: Pod "var-expansion-6c2ec28d-579c-4594-81d7-0e8bf12e2921": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057877544s
STEP: Saw pod success
Mar 30 07:15:10.437: INFO: Pod "var-expansion-6c2ec28d-579c-4594-81d7-0e8bf12e2921" satisfied condition "Succeeded or Failed"
Mar 30 07:15:10.466: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod var-expansion-6c2ec28d-579c-4594-81d7-0e8bf12e2921 container dapi-container: <nil>
STEP: delete the pod
Mar 30 07:15:10.550: INFO: Waiting for pod var-expansion-6c2ec28d-579c-4594-81d7-0e8bf12e2921 to disappear
Mar 30 07:15:10.580: INFO: Pod var-expansion-6c2ec28d-579c-4594-81d7-0e8bf12e2921 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 07:15:10.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3009" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":283,"completed":186,"skipped":2992,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Mar 30 07:15:14.040: INFO: Deleting pod "var-expansion-56b16854-6e0f-4e1d-b351-7c591df6a68a" in namespace "var-expansion-5798"
Mar 30 07:15:14.073: INFO: Wait up to 5m0s for pod "var-expansion-56b16854-6e0f-4e1d-b351-7c591df6a68a" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 07:15:56.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5798" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":283,"completed":187,"skipped":3031,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-0e8553e1-f564-4e08-8ccd-1b51dd617e1e
STEP: Creating a pod to test consume configMaps
Mar 30 07:15:56.414: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b20f4fe5-002c-4f08-b160-9f2b0886d2a2" in namespace "projected-1728" to be "Succeeded or Failed"
Mar 30 07:15:56.444: INFO: Pod "pod-projected-configmaps-b20f4fe5-002c-4f08-b160-9f2b0886d2a2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.210589ms
Mar 30 07:15:58.473: INFO: Pod "pod-projected-configmaps-b20f4fe5-002c-4f08-b160-9f2b0886d2a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058713459s
STEP: Saw pod success
Mar 30 07:15:58.473: INFO: Pod "pod-projected-configmaps-b20f4fe5-002c-4f08-b160-9f2b0886d2a2" satisfied condition "Succeeded or Failed"
Mar 30 07:15:58.503: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-projected-configmaps-b20f4fe5-002c-4f08-b160-9f2b0886d2a2 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 07:15:58.578: INFO: Waiting for pod pod-projected-configmaps-b20f4fe5-002c-4f08-b160-9f2b0886d2a2 to disappear
Mar 30 07:15:58.608: INFO: Pod pod-projected-configmaps-b20f4fe5-002c-4f08-b160-9f2b0886d2a2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 07:15:58.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1728" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":188,"skipped":3045,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-3a6a51f3-f725-4943-8153-8bff1a0bc312
STEP: Creating a pod to test consume configMaps
Mar 30 07:15:58.894: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ba283c49-bb2e-4be9-82ec-c2a77bbfe4b1" in namespace "projected-5775" to be "Succeeded or Failed"
Mar 30 07:15:58.927: INFO: Pod "pod-projected-configmaps-ba283c49-bb2e-4be9-82ec-c2a77bbfe4b1": Phase="Pending", Reason="", readiness=false. Elapsed: 33.42941ms
Mar 30 07:16:00.957: INFO: Pod "pod-projected-configmaps-ba283c49-bb2e-4be9-82ec-c2a77bbfe4b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062978112s
Mar 30 07:16:02.986: INFO: Pod "pod-projected-configmaps-ba283c49-bb2e-4be9-82ec-c2a77bbfe4b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092650695s
STEP: Saw pod success
Mar 30 07:16:02.986: INFO: Pod "pod-projected-configmaps-ba283c49-bb2e-4be9-82ec-c2a77bbfe4b1" satisfied condition "Succeeded or Failed"
Mar 30 07:16:03.016: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-projected-configmaps-ba283c49-bb2e-4be9-82ec-c2a77bbfe4b1 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 07:16:03.091: INFO: Waiting for pod pod-projected-configmaps-ba283c49-bb2e-4be9-82ec-c2a77bbfe4b1 to disappear
Mar 30 07:16:03.121: INFO: Pod pod-projected-configmaps-ba283c49-bb2e-4be9-82ec-c2a77bbfe4b1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 07:16:03.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5775" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":283,"completed":189,"skipped":3050,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 30 07:16:03.210: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 30 07:16:03.369: INFO: Waiting up to 5m0s for pod "pod-d0024853-a5da-4eb1-ac74-a90d19bee33e" in namespace "emptydir-7026" to be "Succeeded or Failed"
Mar 30 07:16:03.401: INFO: Pod "pod-d0024853-a5da-4eb1-ac74-a90d19bee33e": Phase="Pending", Reason="", readiness=false. Elapsed: 32.236979ms
Mar 30 07:16:05.431: INFO: Pod "pod-d0024853-a5da-4eb1-ac74-a90d19bee33e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062004575s
STEP: Saw pod success
Mar 30 07:16:05.431: INFO: Pod "pod-d0024853-a5da-4eb1-ac74-a90d19bee33e" satisfied condition "Succeeded or Failed"
Mar 30 07:16:05.460: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-d0024853-a5da-4eb1-ac74-a90d19bee33e container test-container: <nil>
STEP: delete the pod
Mar 30 07:16:05.540: INFO: Waiting for pod pod-d0024853-a5da-4eb1-ac74-a90d19bee33e to disappear
Mar 30 07:16:05.569: INFO: Pod pod-d0024853-a5da-4eb1-ac74-a90d19bee33e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 30 07:16:05.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7026" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":190,"skipped":3050,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 07:16:05.661: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 07:18:05.927: INFO: Deleting pod "var-expansion-1585152b-2d92-46e8-a472-410c2fe66947" in namespace "var-expansion-7738"
Mar 30 07:18:05.962: INFO: Wait up to 5m0s for pod "var-expansion-1585152b-2d92-46e8-a472-410c2fe66947" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 30 07:18:16.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7738" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":283,"completed":191,"skipped":3107,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 30 07:18:18.361: INFO: Trying to dial the pod
Mar 30 07:18:23.451: INFO: Controller my-hostname-basic-9e942914-cf5f-462e-8bed-903595f403a0: Got expected result from replica 1 [my-hostname-basic-9e942914-cf5f-462e-8bed-903595f403a0-p94bp]: "my-hostname-basic-9e942914-cf5f-462e-8bed-903595f403a0-p94bp", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 30 07:18:23.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7633" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":192,"skipped":3120,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3468
STEP: Creating statefulset with conflicting port in namespace statefulset-3468
STEP: Waiting until pod test-pod starts running in namespace statefulset-3468
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-3468

Mar 30 07:18:27.891: INFO: Observed stateful pod in namespace: statefulset-3468, name: ss-0, uid: 95c052de-c7d6-4b92-865f-d91bc30ee923, status phase: Pending. Waiting for statefulset controller to delete.
Mar 30 07:18:28.177: INFO: Observed stateful pod in namespace: statefulset-3468, name: ss-0, uid: 95c052de-c7d6-4b92-865f-d91bc30ee923, status phase: Failed. Waiting for statefulset controller to delete.
Mar 30 07:18:28.187: INFO: Observed stateful pod in namespace: statefulset-3468, name: ss-0, uid: 95c052de-c7d6-4b92-865f-d91bc30ee923, status phase: Failed. Waiting for statefulset controller to delete.
Mar 30 07:18:28.193: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3468
STEP: Removing pod with conflicting port in namespace statefulset-3468
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3468 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:110
Mar 30 07:18:32.326: INFO: Deleting all statefulset in ns statefulset-3468
Mar 30 07:18:32.357: INFO: Scaling statefulset ss to 0
Mar 30 07:18:52.480: INFO: Waiting for statefulset status.replicas updated to 0
Mar 30 07:18:52.509: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 30 07:18:52.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3468" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":283,"completed":193,"skipped":3125,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
Mar 30 07:19:00.498: INFO: stderr: ""
Mar 30 07:19:00.498: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 07:19:00.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8966" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":283,"completed":194,"skipped":3152,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Mar 30 07:19:07.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3281" for this suite.
STEP: Destroying namespace "webhook-3281-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":283,"completed":195,"skipped":3156,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 07:19:08.484: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-08462305-1c61-4af0-b638-7b6127cb36d3" in namespace "security-context-test-8534" to be "Succeeded or Failed"
Mar 30 07:19:08.514: INFO: Pod "alpine-nnp-false-08462305-1c61-4af0-b638-7b6127cb36d3": Phase="Pending", Reason="", readiness=false. Elapsed: 29.476505ms
Mar 30 07:19:10.543: INFO: Pod "alpine-nnp-false-08462305-1c61-4af0-b638-7b6127cb36d3": Phase="Running", Reason="", readiness=true. Elapsed: 2.05880584s
Mar 30 07:19:12.576: INFO: Pod "alpine-nnp-false-08462305-1c61-4af0-b638-7b6127cb36d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091798932s
Mar 30 07:19:12.576: INFO: Pod "alpine-nnp-false-08462305-1c61-4af0-b638-7b6127cb36d3" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 30 07:19:12.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8534" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":196,"skipped":3231,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 34 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 30 07:19:21.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6369" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":283,"completed":197,"skipped":3246,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 30 07:19:21.739: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 30 07:19:21.900: INFO: Waiting up to 5m0s for pod "downward-api-1ac6b2d1-4356-487a-957f-cc4b45a116ea" in namespace "downward-api-6417" to be "Succeeded or Failed"
Mar 30 07:19:21.928: INFO: Pod "downward-api-1ac6b2d1-4356-487a-957f-cc4b45a116ea": Phase="Pending", Reason="", readiness=false. Elapsed: 28.383869ms
Mar 30 07:19:23.958: INFO: Pod "downward-api-1ac6b2d1-4356-487a-957f-cc4b45a116ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057887278s
STEP: Saw pod success
Mar 30 07:19:23.958: INFO: Pod "downward-api-1ac6b2d1-4356-487a-957f-cc4b45a116ea" satisfied condition "Succeeded or Failed"
Mar 30 07:19:23.987: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod downward-api-1ac6b2d1-4356-487a-957f-cc4b45a116ea container dapi-container: <nil>
STEP: delete the pod
Mar 30 07:19:24.067: INFO: Waiting for pod downward-api-1ac6b2d1-4356-487a-957f-cc4b45a116ea to disappear
Mar 30 07:19:24.096: INFO: Pod downward-api-1ac6b2d1-4356-487a-957f-cc4b45a116ea no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 30 07:19:24.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6417" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":283,"completed":198,"skipped":3253,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 30 07:19:24.317: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 07:19:24.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6912" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":283,"completed":199,"skipped":3268,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 30 07:19:25.458: INFO: stderr: ""
Mar 30 07:19:25.458: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd.projectcalico.org/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 07:19:25.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6805" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":283,"completed":200,"skipped":3283,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-24ce676f-763a-4e99-aa21-7476fa9d8d21
STEP: Creating a pod to test consume configMaps
Mar 30 07:19:25.755: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-225916f5-afd8-4f22-bb32-558aa7c2f56f" in namespace "projected-947" to be "Succeeded or Failed"
Mar 30 07:19:25.786: INFO: Pod "pod-projected-configmaps-225916f5-afd8-4f22-bb32-558aa7c2f56f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.34112ms
Mar 30 07:19:27.816: INFO: Pod "pod-projected-configmaps-225916f5-afd8-4f22-bb32-558aa7c2f56f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060839067s
STEP: Saw pod success
Mar 30 07:19:27.816: INFO: Pod "pod-projected-configmaps-225916f5-afd8-4f22-bb32-558aa7c2f56f" satisfied condition "Succeeded or Failed"
Mar 30 07:19:27.845: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-projected-configmaps-225916f5-afd8-4f22-bb32-558aa7c2f56f container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 07:19:27.925: INFO: Waiting for pod pod-projected-configmaps-225916f5-afd8-4f22-bb32-558aa7c2f56f to disappear
Mar 30 07:19:27.955: INFO: Pod pod-projected-configmaps-225916f5-afd8-4f22-bb32-558aa7c2f56f no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 30 07:19:27.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-947" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":201,"skipped":3298,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Mar 30 07:19:48.615: INFO: Waiting for statefulset status.replicas updated to 0
Mar 30 07:19:48.644: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 30 07:19:48.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5156" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":283,"completed":202,"skipped":3325,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 417 lines ...
Mar 30 07:19:59.799: INFO: 99 %ile: 951.702361ms
Mar 30 07:19:59.799: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
Mar 30 07:19:59.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8454" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":283,"completed":203,"skipped":3325,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 30 07:20:00.024: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 07:20:02.800: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 07:20:16.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7055" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":283,"completed":204,"skipped":3332,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 07:20:17.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1939" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":283,"completed":205,"skipped":3414,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Mar 30 07:21:07.618: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8599 /api/v1/namespaces/watch-8599/configmaps/e2e-watch-test-configmap-b 8977b9bc-2034-45d5-8984-0cff7f156dc8 22518 0 2020-03-30 07:20:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 30 07:21:07.618: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8599 /api/v1/namespaces/watch-8599/configmaps/e2e-watch-test-configmap-b 8977b9bc-2034-45d5-8984-0cff7f156dc8 22518 0 2020-03-30 07:20:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 30 07:21:17.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8599" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":283,"completed":206,"skipped":3414,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Mar 30 07:22:06.668: INFO: Restart count of pod container-probe-6988/busybox-2758a3b1-1381-4521-8422-d511b184129b is now 1 (46.708475051s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 07:22:06.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6988" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":283,"completed":207,"skipped":3432,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 30 07:22:06.920: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 30 07:22:10.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9226" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":283,"completed":208,"skipped":3461,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 76 lines ...
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sglf7 webserver-deployment-595b5b9587- deployment-8708 /api/v1/namespaces/deployment-8708/pods/webserver-deployment-595b5b9587-sglf7 bbd2af81-abfb-42d2-8d43-30e5253bb0c8 22859 0 2020-03-30 07:22:10 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:192.168.14.98/32] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 844933fd-2f7d-4f46-8096-a1f6df0dcecd 0xc0042885c0 0xc0042885c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5x4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5x4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5x4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-30 07:22:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.14.98,StartTime:2020-03-30 07:22:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 07:22:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://189dafeb8d5d6670dd60d02e29638ae89265539d0f6f1d8b38ab7ad93cfe8345,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.14.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 07:22:19.694: INFO: Pod "webserver-deployment-595b5b9587-w9vfc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w9vfc webserver-deployment-595b5b9587- deployment-8708 /api/v1/namespaces/deployment-8708/pods/webserver-deployment-595b5b9587-w9vfc 7ff393c5-fff8-433c-92e2-59ff8726e3a5 23063 0 2020-03-30 07:22:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 844933fd-2f7d-4f46-8096-a1f6df0dcecd 0xc004288720 0xc004288721}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5x4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5x4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5x4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:17 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 07:22:19.694: INFO: Pod "webserver-deployment-c7997dcc8-4hwzc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4hwzc webserver-deployment-c7997dcc8- deployment-8708 /api/v1/namespaces/deployment-8708/pods/webserver-deployment-c7997dcc8-4hwzc 66332b1a-00e1-4c00-aa31-4bcbade28999 23210 0 2020-03-30 07:22:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.14.106/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9075af68-fc57-48d9-a82d-83286dd6b83a 0xc004288830 0xc004288831}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5x4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5x4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5x4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:17 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 07:22:19.694: INFO: Pod "webserver-deployment-c7997dcc8-4zmvw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4zmvw webserver-deployment-c7997dcc8- deployment-8708 /api/v1/namespaces/deployment-8708/pods/webserver-deployment-c7997dcc8-4zmvw 15d4cfdc-54d5-465a-9752-456fc44a27dc 22998 0 2020-03-30 07:22:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.130.225/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9075af68-fc57-48d9-a82d-83286dd6b83a 0xc004288940 0xc004288941}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5x4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5x4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5x4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.5,PodIP:192.168.130.225,StartTime:2020-03-30 07:22:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.130.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 07:22:19.694: INFO: Pod "webserver-deployment-c7997dcc8-6x4jp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6x4jp webserver-deployment-c7997dcc8- deployment-8708 /api/v1/namespaces/deployment-8708/pods/webserver-deployment-c7997dcc8-6x4jp 1bb974f7-5ebe-4af1-b6b3-1ed631c00bac 23098 0 2020-03-30 07:22:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.14.96/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9075af68-fc57-48d9-a82d-83286dd6b83a 0xc004288af0 0xc004288af1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5x4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5x4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5x4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.14.96,StartTime:2020-03-30 07:22:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.14.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 07:22:19.694: INFO: Pod "webserver-deployment-c7997dcc8-8tfnv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8tfnv webserver-deployment-c7997dcc8- deployment-8708 /api/v1/namespaces/deployment-8708/pods/webserver-deployment-c7997dcc8-8tfnv bca49256-2886-4314-8641-122de2edc67e 22995 0 2020-03-30 07:22:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.130.222/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9075af68-fc57-48d9-a82d-83286dd6b83a 0xc004288c80 0xc004288c81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5x4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5x4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5x4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.5,PodIP:192.168.130.222,StartTime:2020-03-30 07:22:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.130.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 07:22:19.695: INFO: Pod "webserver-deployment-c7997dcc8-b5rs6" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b5rs6 webserver-deployment-c7997dcc8- deployment-8708 /api/v1/namespaces/deployment-8708/pods/webserver-deployment-c7997dcc8-b5rs6 ac0af287-fa7d-44cf-87e4-6f7a881a04c2 23000 0 2020-03-30 07:22:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.130.221/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9075af68-fc57-48d9-a82d-83286dd6b83a 0xc004288e10 0xc004288e11}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5x4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5x4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5x4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.5,PodIP:192.168.130.221,StartTime:2020-03-30 07:22:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.130.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 07:22:19.695: INFO: Pod "webserver-deployment-c7997dcc8-bn4sq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bn4sq webserver-deployment-c7997dcc8- deployment-8708 /api/v1/namespaces/deployment-8708/pods/webserver-deployment-c7997dcc8-bn4sq fd564c85-7460-4f82-bbce-acaa86c3fd7e 23093 0 2020-03-30 07:22:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.14.97/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9075af68-fc57-48d9-a82d-83286dd6b83a 0xc004288fb0 0xc004288fb1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5x4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5x4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5x4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.14.97,StartTime:2020-03-30 07:22:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.14.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 07:22:19.695: INFO: Pod "webserver-deployment-c7997dcc8-bt8g2" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bt8g2 webserver-deployment-c7997dcc8- deployment-8708 /api/v1/namespaces/deployment-8708/pods/webserver-deployment-c7997dcc8-bt8g2 103455c3-c180-4777-96e9-0550e6b2b909 23201 0 2020-03-30 07:22:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.130.235/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9075af68-fc57-48d9-a82d-83286dd6b83a 0xc004289140 0xc004289141}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5x4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5x4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5x4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:17 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 30 07:22:19.695: INFO: Pod "webserver-deployment-c7997dcc8-gcb2b" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gcb2b webserver-deployment-c7997dcc8- deployment-8708 /api/v1/namespaces/deployment-8708/pods/webserver-deployment-c7997dcc8-gcb2b 00e71bac-8e1a-47e0-ba6b-02404955bc27 23137 0 2020-03-30 07:22:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.130.226/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9075af68-fc57-48d9-a82d-83286dd6b83a 0xc004289250 0xc004289251}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5x4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5x4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5x4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:17 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.5,PodIP:,StartTime:2020-03-30 07:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 8 lines ...
Mar 30 07:22:19.696: INFO: Pod "webserver-deployment-c7997dcc8-zrdjm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zrdjm webserver-deployment-c7997dcc8- deployment-8708 /api/v1/namespaces/deployment-8708/pods/webserver-deployment-c7997dcc8-zrdjm faddbb25-f31c-4a18-86ee-1a66a0ff2c1d 23193 0 2020-03-30 07:22:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.14.102/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9075af68-fc57-48d9-a82d-83286dd6b83a 0xc0042898a0 0xc0042898a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l5x4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l5x4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l5x4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:17 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 07:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-30 07:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 30 07:22:19.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8708" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":283,"completed":209,"skipped":3462,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 181 lines ...
Mar 30 07:22:56.263: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-32/pods","resourceVersion":"23767"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 30 07:22:56.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-32" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":283,"completed":210,"skipped":3548,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 07:22:56.447: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Mar 30 07:22:56.575: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 30 07:22:58.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9751" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":283,"completed":211,"skipped":3562,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  test/e2e/framework/framework.go:175
Mar 30 07:23:03.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5960" for this suite.
STEP: Destroying namespace "webhook-5960-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":283,"completed":212,"skipped":3580,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 30 07:23:20.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1435" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":283,"completed":213,"skipped":3580,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 07:23:21.008: INFO: Waiting up to 5m0s for pod "downwardapi-volume-32eea874-bb82-4b4c-85ec-01e9f7854fbc" in namespace "downward-api-1439" to be "Succeeded or Failed"
Mar 30 07:23:21.037: INFO: Pod "downwardapi-volume-32eea874-bb82-4b4c-85ec-01e9f7854fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.373417ms
Mar 30 07:23:23.067: INFO: Pod "downwardapi-volume-32eea874-bb82-4b4c-85ec-01e9f7854fbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058651887s
STEP: Saw pod success
Mar 30 07:23:23.067: INFO: Pod "downwardapi-volume-32eea874-bb82-4b4c-85ec-01e9f7854fbc" satisfied condition "Succeeded or Failed"
Mar 30 07:23:23.096: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-32eea874-bb82-4b4c-85ec-01e9f7854fbc container client-container: <nil>
STEP: delete the pod
Mar 30 07:23:23.182: INFO: Waiting for pod downwardapi-volume-32eea874-bb82-4b4c-85ec-01e9f7854fbc to disappear
Mar 30 07:23:23.212: INFO: Pod downwardapi-volume-32eea874-bb82-4b4c-85ec-01e9f7854fbc no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 07:23:23.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1439" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":214,"skipped":3583,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
Mar 30 07:23:26.755: INFO: Pod "adopt-release-2892f": Phase="Running", Reason="", readiness=true. Elapsed: 29.41409ms
Mar 30 07:23:26.755: INFO: Pod "adopt-release-2892f" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 30 07:23:26.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7193" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":283,"completed":215,"skipped":3615,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 25 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:175
Mar 30 07:23:36.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6113" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":283,"completed":216,"skipped":3649,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 30 07:23:36.375: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Wait for the deployment to be ready
Mar 30 07:23:37.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721149817, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721149817, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721149817, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721149817, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 30 07:23:39.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721149817, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721149817, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721149817, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721149817, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 30 07:23:42.700: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 07:23:42.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-843" for this suite.
STEP: Destroying namespace "webhook-843-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":283,"completed":217,"skipped":3663,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Mar 30 07:23:46.124: INFO: Successfully updated pod "annotationupdate87c03771-d1ed-46eb-9035-548797b132f7"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 07:23:50.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1845" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":218,"skipped":3668,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-73597c72-a809-4947-88a5-033af11e5abe
STEP: Creating a pod to test consume configMaps
Mar 30 07:23:50.507: INFO: Waiting up to 5m0s for pod "pod-configmaps-aff78195-d894-40d0-991f-791e5406fbb5" in namespace "configmap-8184" to be "Succeeded or Failed"
Mar 30 07:23:50.536: INFO: Pod "pod-configmaps-aff78195-d894-40d0-991f-791e5406fbb5": Phase="Pending", Reason="", readiness=false. Elapsed: 29.024481ms
Mar 30 07:23:52.565: INFO: Pod "pod-configmaps-aff78195-d894-40d0-991f-791e5406fbb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058389133s
STEP: Saw pod success
Mar 30 07:23:52.565: INFO: Pod "pod-configmaps-aff78195-d894-40d0-991f-791e5406fbb5" satisfied condition "Succeeded or Failed"
Mar 30 07:23:52.595: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-configmaps-aff78195-d894-40d0-991f-791e5406fbb5 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 07:23:52.679: INFO: Waiting for pod pod-configmaps-aff78195-d894-40d0-991f-791e5406fbb5 to disappear
Mar 30 07:23:52.708: INFO: Pod pod-configmaps-aff78195-d894-40d0-991f-791e5406fbb5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 07:23:52.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8184" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":219,"skipped":3668,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 30 07:23:59.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4081" for this suite.
STEP: Destroying namespace "webhook-4081-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":283,"completed":220,"skipped":3684,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 30 07:24:02.609: INFO: Successfully updated pod "annotationupdatecce68f8b-a978-42ac-86e2-9ac63552cedf"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 07:24:06.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1160" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":221,"skipped":3684,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Mar 30 07:24:13.401: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 30 07:24:13.642: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:175
Mar 30 07:24:13.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5135" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":222,"skipped":3691,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 07:25:13.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8770" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":283,"completed":223,"skipped":3691,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 21 lines ...
Mar 30 07:25:26.469: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 30 07:25:26.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8963" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":283,"completed":224,"skipped":3691,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Mar 30 07:25:26.971: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5314 /api/v1/namespaces/watch-5314/configmaps/e2e-watch-test-watch-closed 0b91d4e5-4807-4e82-a963-37b406f94abb 24808 0 2020-03-30 07:25:26 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 30 07:25:26.971: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5314 /api/v1/namespaces/watch-5314/configmaps/e2e-watch-test-watch-closed 0b91d4e5-4807-4e82-a963-37b406f94abb 24809 0 2020-03-30 07:25:26 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 30 07:25:26.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5314" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":283,"completed":225,"skipped":3703,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Mar 30 07:25:31.481: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:31.513: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:31.727: INFO: Unable to read jessie_udp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:31.759: INFO: Unable to read jessie_tcp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:31.791: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:31.822: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:32.008: INFO: Lookups using dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a failed for: [wheezy_udp@dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@dns-test-service.dns-5452.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_udp@dns-test-service.dns-5452.svc.cluster.local jessie_tcp@dns-test-service.dns-5452.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local]

Mar 30 07:25:37.039: INFO: Unable to read wheezy_udp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:37.070: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:37.100: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:37.131: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:37.341: INFO: Unable to read jessie_udp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:37.372: INFO: Unable to read jessie_tcp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:37.404: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:37.437: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:37.623: INFO: Lookups using dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a failed for: [wheezy_udp@dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@dns-test-service.dns-5452.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_udp@dns-test-service.dns-5452.svc.cluster.local jessie_tcp@dns-test-service.dns-5452.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local]

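Each failed round above probes the same record set from two test containers (the wheezy_/jessie_ prefixes): A/AAAA lookups of the service name over UDP and TCP, and SRV lookups of its named port (_http._tcp.<service>); the INFO lines simply poll until the records resolve. A minimal in-pod sketch of the equivalent lookups, reusing the service name from this log:

    package main

    import (
    	"fmt"
    	"net"
    )

    // Must run inside a pod so *.svc.cluster.local resolves via the cluster DNS.
    func main() {
    	const name = "dns-test-service.dns-5452.svc.cluster.local"

    	// A/AAAA records for the service (what wheezy_udp@<name> probes).
    	if addrs, err := net.LookupHost(name); err != nil {
    		fmt.Println("host lookup failed:", err)
    	} else {
    		fmt.Println("addrs:", addrs)
    	}

    	// SRV record for the named port (what _http._tcp.<name> probes).
    	if _, srvs, err := net.LookupSRV("http", "tcp", name); err != nil {
    		fmt.Println("srv lookup failed:", err)
    	} else {
    		for _, s := range srvs {
    			fmt.Printf("srv: %s:%d\n", s.Target, s.Port)
    		}
    	}
    }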
Mar 30 07:25:42.043: INFO: Unable to read wheezy_udp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:42.073: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:42.103: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:42.134: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:42.350: INFO: Unable to read jessie_udp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:42.382: INFO: Unable to read jessie_tcp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:42.412: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:42.443: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:42.628: INFO: Lookups using dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a failed for: [wheezy_udp@dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@dns-test-service.dns-5452.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_udp@dns-test-service.dns-5452.svc.cluster.local jessie_tcp@dns-test-service.dns-5452.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local]

Mar 30 07:25:47.039: INFO: Unable to read wheezy_udp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:47.070: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:47.102: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:47.133: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:47.349: INFO: Unable to read jessie_udp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:47.380: INFO: Unable to read jessie_tcp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:47.410: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:47.441: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:47.626: INFO: Lookups using dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a failed for: [wheezy_udp@dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@dns-test-service.dns-5452.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_udp@dns-test-service.dns-5452.svc.cluster.local jessie_tcp@dns-test-service.dns-5452.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local]

Mar 30 07:25:52.039: INFO: Unable to read wheezy_udp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:52.069: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:52.099: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:52.129: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:52.345: INFO: Unable to read jessie_udp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:52.377: INFO: Unable to read jessie_tcp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:52.409: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:52.441: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:52.630: INFO: Lookups using dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a failed for: [wheezy_udp@dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@dns-test-service.dns-5452.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_udp@dns-test-service.dns-5452.svc.cluster.local jessie_tcp@dns-test-service.dns-5452.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local]

Mar 30 07:25:57.039: INFO: Unable to read wheezy_udp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:57.070: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:57.101: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:57.131: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:57.347: INFO: Unable to read jessie_udp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:57.379: INFO: Unable to read jessie_tcp@dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:57.410: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:57.441: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local from pod dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a: the server could not find the requested resource (get pods dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a)
Mar 30 07:25:57.628: INFO: Lookups using dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a failed for: [wheezy_udp@dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@dns-test-service.dns-5452.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_udp@dns-test-service.dns-5452.svc.cluster.local jessie_tcp@dns-test-service.dns-5452.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5452.svc.cluster.local]

Mar 30 07:26:02.628: INFO: DNS probes using dns-5452/dns-test-f8534c2e-85df-4c2c-90a4-1cf108b12b6a succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 07:26:02.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5452" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":283,"completed":226,"skipped":3713,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Mar 30 07:26:07.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9391" for this suite.
STEP: Destroying namespace "webhook-9391-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":283,"completed":227,"skipped":3758,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 77 lines ...
Mar 30 07:26:26.212: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1110/pods","resourceVersion":"25160"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 30 07:26:26.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1110" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":283,"completed":228,"skipped":3776,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 07:26:26.559: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-c487a039-6f25-40a0-b1e9-d365714a9a31" in namespace "security-context-test-266" to be "Succeeded or Failed"
Mar 30 07:26:26.589: INFO: Pod "busybox-readonly-false-c487a039-6f25-40a0-b1e9-d365714a9a31": Phase="Pending", Reason="", readiness=false. Elapsed: 30.045419ms
Mar 30 07:26:28.618: INFO: Pod "busybox-readonly-false-c487a039-6f25-40a0-b1e9-d365714a9a31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059553599s
Mar 30 07:26:28.618: INFO: Pod "busybox-readonly-false-c487a039-6f25-40a0-b1e9-d365714a9a31" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 30 07:26:28.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-266" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":283,"completed":229,"skipped":3781,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0330 07:26:29.070266   25071 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 30 07:26:29.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2355" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":283,"completed":230,"skipped":3788,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
Mar 30 07:26:32.265: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 30 07:26:32.265: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 07:26:32.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5035" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":283,"completed":231,"skipped":3819,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 07:26:34.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7608" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":232,"skipped":3836,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 31.255019ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 30 07:26:35.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-367" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":283,"completed":233,"skipped":3866,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 07:26:38.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9199" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":283,"completed":234,"skipped":3899,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 16 lines ...
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 30 07:26:55.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8962" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":283,"completed":235,"skipped":3901,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Mar 30 07:26:58.906: INFO: Successfully updated pod "labelsupdatebec13cf7-8a34-4621-8ce7-f5ffd1262da6"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 30 07:27:03.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1967" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":283,"completed":236,"skipped":3904,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Mar 30 07:27:05.977: INFO: Unable to read jessie_udp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:06.008: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:06.039: INFO: Unable to read jessie_udp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:06.068: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:06.099: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:06.130: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:06.316: INFO: Lookups using dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1123 wheezy_tcp@dns-test-service.dns-1123 wheezy_udp@dns-test-service.dns-1123.svc wheezy_tcp@dns-test-service.dns-1123.svc wheezy_udp@_http._tcp.dns-test-service.dns-1123.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1123.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1123 jessie_tcp@dns-test-service.dns-1123 jessie_udp@dns-test-service.dns-1123.svc jessie_tcp@dns-test-service.dns-1123.svc jessie_udp@_http._tcp.dns-test-service.dns-1123.svc jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc]

Mar 30 07:27:11.347: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:11.378: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:11.409: INFO: Unable to read wheezy_udp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:11.440: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:11.471: INFO: Unable to read wheezy_udp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
... skipping 5 lines ...
Mar 30 07:27:11.849: INFO: Unable to read jessie_udp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:11.880: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:11.911: INFO: Unable to read jessie_udp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:11.941: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:11.973: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:12.003: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:12.192: INFO: Lookups using dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1123 wheezy_tcp@dns-test-service.dns-1123 wheezy_udp@dns-test-service.dns-1123.svc wheezy_tcp@dns-test-service.dns-1123.svc wheezy_udp@_http._tcp.dns-test-service.dns-1123.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1123.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1123 jessie_tcp@dns-test-service.dns-1123 jessie_udp@dns-test-service.dns-1123.svc jessie_tcp@dns-test-service.dns-1123.svc jessie_udp@_http._tcp.dns-test-service.dns-1123.svc jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc]

Mar 30 07:27:16.346: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:16.378: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:16.409: INFO: Unable to read wheezy_udp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:16.440: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:16.470: INFO: Unable to read wheezy_udp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
... skipping 5 lines ...
Mar 30 07:27:16.839: INFO: Unable to read jessie_udp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:16.870: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:16.901: INFO: Unable to read jessie_udp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:16.932: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:16.962: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:16.992: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:17.179: INFO: Lookups using dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1123 wheezy_tcp@dns-test-service.dns-1123 wheezy_udp@dns-test-service.dns-1123.svc wheezy_tcp@dns-test-service.dns-1123.svc wheezy_udp@_http._tcp.dns-test-service.dns-1123.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1123.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1123 jessie_tcp@dns-test-service.dns-1123 jessie_udp@dns-test-service.dns-1123.svc jessie_tcp@dns-test-service.dns-1123.svc jessie_udp@_http._tcp.dns-test-service.dns-1123.svc jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc]

Mar 30 07:27:21.346: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:21.377: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:21.408: INFO: Unable to read wheezy_udp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:21.439: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:21.469: INFO: Unable to read wheezy_udp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
... skipping 5 lines ...
Mar 30 07:27:21.846: INFO: Unable to read jessie_udp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:21.876: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:21.907: INFO: Unable to read jessie_udp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:21.937: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:21.968: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:22.001: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:22.188: INFO: Lookups using dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1123 wheezy_tcp@dns-test-service.dns-1123 wheezy_udp@dns-test-service.dns-1123.svc wheezy_tcp@dns-test-service.dns-1123.svc wheezy_udp@_http._tcp.dns-test-service.dns-1123.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1123.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1123 jessie_tcp@dns-test-service.dns-1123 jessie_udp@dns-test-service.dns-1123.svc jessie_tcp@dns-test-service.dns-1123.svc jessie_udp@_http._tcp.dns-test-service.dns-1123.svc jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc]

Mar 30 07:27:26.347: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:26.378: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:26.410: INFO: Unable to read wheezy_udp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:26.441: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:26.472: INFO: Unable to read wheezy_udp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
... skipping 5 lines ...
Mar 30 07:27:26.851: INFO: Unable to read jessie_udp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:26.881: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:26.913: INFO: Unable to read jessie_udp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:26.944: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:26.973: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:27.004: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:27.189: INFO: Lookups using dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1123 wheezy_tcp@dns-test-service.dns-1123 wheezy_udp@dns-test-service.dns-1123.svc wheezy_tcp@dns-test-service.dns-1123.svc wheezy_udp@_http._tcp.dns-test-service.dns-1123.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1123.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1123 jessie_tcp@dns-test-service.dns-1123 jessie_udp@dns-test-service.dns-1123.svc jessie_tcp@dns-test-service.dns-1123.svc jessie_udp@_http._tcp.dns-test-service.dns-1123.svc jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc]

Mar 30 07:27:31.347: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:31.378: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:31.409: INFO: Unable to read wheezy_udp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:31.441: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:31.472: INFO: Unable to read wheezy_udp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
... skipping 5 lines ...
Mar 30 07:27:31.843: INFO: Unable to read jessie_udp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:31.874: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123 from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:31.905: INFO: Unable to read jessie_udp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:31.936: INFO: Unable to read jessie_tcp@dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:31.967: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:31.999: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc from pod dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01: the server could not find the requested resource (get pods dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01)
Mar 30 07:27:32.181: INFO: Lookups using dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1123 wheezy_tcp@dns-test-service.dns-1123 wheezy_udp@dns-test-service.dns-1123.svc wheezy_tcp@dns-test-service.dns-1123.svc wheezy_udp@_http._tcp.dns-test-service.dns-1123.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1123.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1123 jessie_tcp@dns-test-service.dns-1123 jessie_udp@dns-test-service.dns-1123.svc jessie_tcp@dns-test-service.dns-1123.svc jessie_udp@_http._tcp.dns-test-service.dns-1123.svc jessie_tcp@_http._tcp.dns-test-service.dns-1123.svc]

Mar 30 07:27:37.209: INFO: DNS probes using dns-1123/dns-test-1e507634-1eca-4e36-b0ea-be5042cc2f01 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 07:27:37.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1123" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":283,"completed":237,"skipped":3912,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 18 lines ...
Mar 30 07:27:55.691: INFO: The status of Pod test-webserver-52083fa0-b6bb-4e47-a690-3ff48e1f4ece is Running (Ready = true)
Mar 30 07:27:55.720: INFO: Container started at 2020-03-30 07:27:38 +0000 UTC, pod became ready at 2020-03-30 07:27:53 +0000 UTC
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 07:27:55.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1913" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":283,"completed":238,"skipped":3929,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 30 07:28:00.431: INFO: File wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:00.464: INFO: File jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:00.464: INFO: Lookups using dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e failed for: [wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local]

Mar 30 07:28:05.494: INFO: File wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:05.526: INFO: File jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:05.526: INFO: Lookups using dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e failed for: [wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local]

Mar 30 07:28:10.496: INFO: File wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:10.527: INFO: File jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:10.527: INFO: Lookups using dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e failed for: [wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local]

Mar 30 07:28:15.496: INFO: File wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:15.527: INFO: File jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:15.527: INFO: Lookups using dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e failed for: [wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local]

Mar 30 07:28:20.496: INFO: File wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:20.527: INFO: File jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:20.527: INFO: Lookups using dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e failed for: [wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local]

Mar 30 07:28:25.496: INFO: File wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:25.527: INFO: File jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local from pod  dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 30 07:28:25.527: INFO: Lookups using dns-9452/dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e failed for: [wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local jessie_udp@dns-test-service-3.dns-9452.svc.cluster.local]

Mar 30 07:28:30.529: INFO: DNS probes using dns-test-14bb0c9d-c4e6-4309-9f4d-3e29fb85e02e succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9452.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9452.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 30 07:28:32.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9452" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":283,"completed":239,"skipped":3944,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 30 07:28:33.197: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfe17ef6-e73d-4b7e-bd5e-ad6eb81044e3" in namespace "downward-api-3221" to be "Succeeded or Failed"
Mar 30 07:28:33.228: INFO: Pod "downwardapi-volume-cfe17ef6-e73d-4b7e-bd5e-ad6eb81044e3": Phase="Pending", Reason="", readiness=false. Elapsed: 30.491654ms
Mar 30 07:28:35.257: INFO: Pod "downwardapi-volume-cfe17ef6-e73d-4b7e-bd5e-ad6eb81044e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059587102s
STEP: Saw pod success
Mar 30 07:28:35.257: INFO: Pod "downwardapi-volume-cfe17ef6-e73d-4b7e-bd5e-ad6eb81044e3" satisfied condition "Succeeded or Failed"
Mar 30 07:28:35.286: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod downwardapi-volume-cfe17ef6-e73d-4b7e-bd5e-ad6eb81044e3 container client-container: <nil>
STEP: delete the pod
Mar 30 07:28:35.372: INFO: Waiting for pod downwardapi-volume-cfe17ef6-e73d-4b7e-bd5e-ad6eb81044e3 to disappear
Mar 30 07:28:35.401: INFO: Pod downwardapi-volume-cfe17ef6-e73d-4b7e-bd5e-ad6eb81044e3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 07:28:35.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3221" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":283,"completed":240,"skipped":3948,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-836aa130-391b-454b-b9ea-4910b76804f7
STEP: Creating a pod to test consume configMaps
Mar 30 07:28:35.690: INFO: Waiting up to 5m0s for pod "pod-configmaps-1558f4f6-5aad-4b26-b335-64cc6663790c" in namespace "configmap-423" to be "Succeeded or Failed"
Mar 30 07:28:35.719: INFO: Pod "pod-configmaps-1558f4f6-5aad-4b26-b335-64cc6663790c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.000308ms
Mar 30 07:28:37.748: INFO: Pod "pod-configmaps-1558f4f6-5aad-4b26-b335-64cc6663790c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058030531s
STEP: Saw pod success
Mar 30 07:28:37.748: INFO: Pod "pod-configmaps-1558f4f6-5aad-4b26-b335-64cc6663790c" satisfied condition "Succeeded or Failed"
Mar 30 07:28:37.778: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-configmaps-1558f4f6-5aad-4b26-b335-64cc6663790c container configmap-volume-test: <nil>
STEP: delete the pod
Mar 30 07:28:37.852: INFO: Waiting for pod pod-configmaps-1558f4f6-5aad-4b26-b335-64cc6663790c to disappear
Mar 30 07:28:37.882: INFO: Pod pod-configmaps-1558f4f6-5aad-4b26-b335-64cc6663790c no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 30 07:28:37.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-423" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":241,"skipped":3959,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 30 07:28:40.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1341" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":283,"completed":242,"skipped":3976,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 30 07:28:40.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-981" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":283,"completed":243,"skipped":3977,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 30 07:28:43.573: INFO: Successfully updated pod "labelsupdate9a9bc975-c09c-4437-bd83-fd9b5986e6ad"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 30 07:28:47.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3912" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":283,"completed":244,"skipped":4013,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 30 07:28:47.886: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 07:28:48.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6810" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":283,"completed":245,"skipped":4028,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 07:29:06.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2627" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":283,"completed":246,"skipped":4030,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-3cedc6c2-afac-4c4e-a574-304fcded6f84
STEP: Creating a pod to test consume secrets
Mar 30 07:29:06.500: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-44b0e08e-a340-4e46-9bf1-e0aaa7b6eb23" in namespace "projected-762" to be "Succeeded or Failed"
Mar 30 07:29:06.531: INFO: Pod "pod-projected-secrets-44b0e08e-a340-4e46-9bf1-e0aaa7b6eb23": Phase="Pending", Reason="", readiness=false. Elapsed: 30.358629ms
Mar 30 07:29:08.561: INFO: Pod "pod-projected-secrets-44b0e08e-a340-4e46-9bf1-e0aaa7b6eb23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060196732s
STEP: Saw pod success
Mar 30 07:29:08.561: INFO: Pod "pod-projected-secrets-44b0e08e-a340-4e46-9bf1-e0aaa7b6eb23" satisfied condition "Succeeded or Failed"
Mar 30 07:29:08.593: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod pod-projected-secrets-44b0e08e-a340-4e46-9bf1-e0aaa7b6eb23 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 30 07:29:08.666: INFO: Waiting for pod pod-projected-secrets-44b0e08e-a340-4e46-9bf1-e0aaa7b6eb23 to disappear
Mar 30 07:29:08.695: INFO: Pod pod-projected-secrets-44b0e08e-a340-4e46-9bf1-e0aaa7b6eb23 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 07:29:08.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-762" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":247,"skipped":4033,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 30 07:29:13.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4430" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":283,"completed":248,"skipped":4066,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 30 07:29:13.950: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 30 07:29:16.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7963" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":283,"completed":249,"skipped":4066,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 30 07:29:19.339: INFO: Initial restart count of pod busybox-8171477a-a551-4754-8c8e-8bbb48be9476 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 07:33:20.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6197" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":283,"completed":250,"skipped":4066,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 30 07:33:20.978: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 30 07:33:21.139: INFO: Waiting up to 5m0s for pod "downward-api-5f49752c-175b-4576-a5d1-2a32b8f23f5d" in namespace "downward-api-2429" to be "Succeeded or Failed"
Mar 30 07:33:21.170: INFO: Pod "downward-api-5f49752c-175b-4576-a5d1-2a32b8f23f5d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.866553ms
Mar 30 07:33:23.199: INFO: Pod "downward-api-5f49752c-175b-4576-a5d1-2a32b8f23f5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060237873s
STEP: Saw pod success
Mar 30 07:33:23.200: INFO: Pod "downward-api-5f49752c-175b-4576-a5d1-2a32b8f23f5d" satisfied condition "Succeeded or Failed"
Mar 30 07:33:23.229: INFO: Trying to get logs from node test1-md-0-nwt7t.c.k8s-jkns-gci-gce-multizone.internal pod downward-api-5f49752c-175b-4576-a5d1-2a32b8f23f5d container dapi-container: <nil>
STEP: delete the pod
Mar 30 07:33:23.315: INFO: Waiting for pod downward-api-5f49752c-175b-4576-a5d1-2a32b8f23f5d to disappear
Mar 30 07:33:23.344: INFO: Pod downward-api-5f49752c-175b-4576-a5d1-2a32b8f23f5d no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 30 07:33:23.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2429" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":283,"completed":251,"skipped":4078,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 30 07:33:25.686: INFO: Initial restart count of pod test-webserver-57607606-ba06-4977-adc0-97446ff611de is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 30 07:37:27.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3174" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":252,"skipped":4119,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 30 07:37:27.460: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy --unix-socket=/tmp/kubectl-proxy-unix025000589/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 07:37:27.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8053" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":283,"completed":253,"skipped":4136,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
Mar 30 07:39:43.273: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:175
Mar 30 07:39:43.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-4010" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":283,"completed":254,"skipped":4140,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 30 07:39:43.942: INFO: stderr: ""
Mar 30 07:39:43.942: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://34.107.202.22:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://34.107.202.22:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 30 07:39:43.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5873" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":283,"completed":255,"skipped":4148,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 2 lines ...
Mar 30 07:39:44.033: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:597
Mar 30 07:39:44.159: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-03-30T07:39:45Z"}
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 30 07:39:47.919: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig --namespace=crd-publish-openapi-724 create -f -'
Mar 30 07:39:48.664: INFO: stderr: ""
Mar 30 07:39:48.664: INFO: stdout: "e2e-test-crd-publish-openapi-9242-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 30 07:39:48.664: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.202.22:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig --namespace=crd-publish-openapi-724 delete e2e-test-crd-publish-openapi-9242-crds test-cr'
Mar 30 07:39:48.904: INFO: stderr: ""
... skipping 9 lines ...
Mar 30 07:39:50.078: INFO: stderr: ""
Mar 30 07:39:50.078: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9242-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 30 07:39:53.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-724" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":283,"completed":256,"skipped":4149,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:175
Mar 30 07:39:53.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2129" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":283,"completed":257,"skipped":4157,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  test/e2e/framework/framework.go:175
Mar 30 07:39:54.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-956" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":283,"completed":258,"skipped":4173,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-ac51a924-b5e4-47e0-bd81-99979f66633b
STEP: Creating a pod to test consume secrets
Mar 30 07:39:54.490: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-21980939-391a-4a9e-bf13-61109cc733c5" in namespace "projected-4995" to be "Succeeded or Failed"
Mar 30 07:39:54.518: INFO: Pod "pod-projected-secrets-21980939-391a-4a9e-bf13-61109cc733c5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.237684ms
Mar 30 07:39:56.547: INFO: Pod "pod-projected-secrets-21980939-391a-4a9e-bf13-61109cc733c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057181816s
STEP: Saw pod success
Mar 30 07:39:56.547: INFO: Pod "pod-projected-secrets-21980939-391a-4a9e-bf13-61109cc733c5" satisfied condition "Succeeded or Failed"
Mar 30 07:39:56.576: INFO: Trying to get logs from node test1-md-0-w52ss.c.k8s-jkns-gci-gce-multizone.internal pod pod-projected-secrets-21980939-391a-4a9e-bf13-61109cc733c5 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 30 07:39:56.667: INFO: Waiting for pod pod-projected-secrets-21980939-391a-4a9e-bf13-61109cc733c5 to disappear
Mar 30 07:39:56.697: INFO: Pod pod-projected-secrets-21980939-391a-4a9e-bf13-61109cc733c5 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 30 07:39:56.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4995" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":259,"skipped":4193,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 2 lines ...
Mar 30 07:39:56.789: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
STEP: create the rc
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-03-30T07:40:00Z"}