Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-03-27 20:20
Elapsed: 2h0m
Revision: release-0.2
Resultstore: https://source.cloud.google.com/results/invocations/cc7bc1f8-8d64-47d1-8e33-5204bae6f723/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 125 lines ...
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: 4cdaca7d-771e-4fcc-bf05-69bd1d09c4dc
Loading: 
Loading: 0 packages loaded
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_go/releases/download/v0.22.2/rules_go-v0.22.2.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/kubernetes/repo-infra/archive/v0.0.3.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading: 0 packages loaded
Loading: 0 packages loaded
    currently loading: test/e2e ... (3 packages)
Analyzing: 3 targets (3 packages loaded, 0 targets configured)
Analyzing: 3 targets (16 packages loaded, 9 targets configured)
Analyzing: 3 targets (16 packages loaded, 9 targets configured)
... skipping 1699 lines ...
    ubuntu-1804:
    ubuntu-1804: TASK [sysprep : Truncate shell history] ****************************************
    ubuntu-1804: ok: [default] => (item={u'path': u'/root/.bash_history'})
    ubuntu-1804: ok: [default] => (item={u'path': u'/home/ubuntu/.bash_history'})
    ubuntu-1804:
    ubuntu-1804: PLAY RECAP *********************************************************************
    ubuntu-1804: default                    : ok=60   changed=46   unreachable=0    failed=0    skipped=72   rescued=0    ignored=0
    ubuntu-1804:
==> ubuntu-1804: Deleting instance...
    ubuntu-1804: Instance has been deleted!
==> ubuntu-1804: Creating image...
==> ubuntu-1804: Deleting disk...
    ubuntu-1804: Disk has been deleted!
... skipping 421 lines ...
node/test1-controlplane-2.c.k8s-jkns-gci-gce-1-3.internal condition met
node/test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal condition met
node/test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal condition met
Conformance test: not doing test setup.
I0327 20:49:12.842214   24935 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0327 20:49:12.843824   24935 e2e.go:124] Starting e2e run "6c5d5875-d713-46d0-9894-db8d4b5313e3" on Ginkgo node 1
{"msg":"Test Suite starting","total":283,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585342151 - Will randomize all specs
Will run 283 of 4993 specs
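(The "Random Seed" line above is what makes a randomized run replayable: feeding the same seed back to the runner reproduces the same spec order. A minimal sketch of that idea in Go, using the standard library's seeded PRNG rather than Ginkgo's actual internals:)

```go
package main

import (
	"fmt"
	"math/rand"
)

// shuffledOrder returns a permutation of 0..n-1 driven entirely by seed,
// so the same seed always yields the same ordering. This mirrors the
// pattern behind "Random Seed: 1585342151 - Will randomize all specs";
// the function itself is illustrative, not Ginkgo's implementation.
func shuffledOrder(seed int64, n int) []int {
	order := make([]int, n)
	for i := range order {
		order[i] = i
	}
	r := rand.New(rand.NewSource(seed))
	r.Shuffle(len(order), func(i, j int) { order[i], order[j] = order[j], order[i] })
	return order
}

func main() {
	a := shuffledOrder(1585342151, 5)
	b := shuffledOrder(1585342151, 5)
	fmt.Println(a, b) // same seed, so both permutations are identical
}
```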

Mar 27 20:49:12.863: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 20:49:12.874: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 27 20:49:13.032: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 27 20:49:13.185: INFO: The status of Pod calico-node-vbqps is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 27 20:49:13.185: INFO: 21 / 22 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 27 20:49:13.185: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 27 20:49:13.185: INFO: POD                NODE                                                  PHASE    GRACE  CONDITIONS
Mar 27 20:49:13.185: INFO: calico-node-vbqps  test1-controlplane-1.c.k8s-jkns-gci-gce-1-3.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 20:49:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 20:48:50 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 20:48:50 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 20:48:49 +0000 UTC  }]
Mar 27 20:49:13.185: INFO: 
Mar 27 20:49:15.339: INFO: The status of Pod calico-node-vbqps is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 27 20:49:15.339: INFO: 21 / 22 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Mar 27 20:49:15.339: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 27 20:49:15.339: INFO: POD                NODE                                                  PHASE    GRACE  CONDITIONS
Mar 27 20:49:15.339: INFO: calico-node-vbqps  test1-controlplane-1.c.k8s-jkns-gci-gce-1-3.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 20:49:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 20:48:50 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 20:48:50 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 20:48:49 +0000 UTC  }]
Mar 27 20:49:15.339: INFO: 
Mar 27 20:49:17.338: INFO: 22 / 22 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
... skipping 68 lines ...
Mar 27 20:49:36.568: INFO: stderr: ""
Mar 27 20:49:36.568: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 20:49:36.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5390" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":283,"completed":1,"skipped":3,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 27 20:49:41.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8986" for this suite.
STEP: Destroying namespace "webhook-8986-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":283,"completed":2,"skipped":8,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 20:49:41.697: INFO: Waiting up to 5m0s for pod "downwardapi-volume-163c9c3f-08df-48f0-a0f4-9b73e6ff85bb" in namespace "projected-792" to be "Succeeded or Failed"
Mar 27 20:49:41.727: INFO: Pod "downwardapi-volume-163c9c3f-08df-48f0-a0f4-9b73e6ff85bb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.844193ms
Mar 27 20:49:43.758: INFO: Pod "downwardapi-volume-163c9c3f-08df-48f0-a0f4-9b73e6ff85bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061095958s
Mar 27 20:49:45.787: INFO: Pod "downwardapi-volume-163c9c3f-08df-48f0-a0f4-9b73e6ff85bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090637887s
STEP: Saw pod success
Mar 27 20:49:45.787: INFO: Pod "downwardapi-volume-163c9c3f-08df-48f0-a0f4-9b73e6ff85bb" satisfied condition "Succeeded or Failed"
Mar 27 20:49:45.817: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-163c9c3f-08df-48f0-a0f4-9b73e6ff85bb container client-container: <nil>
STEP: delete the pod
Mar 27 20:49:45.905: INFO: Waiting for pod downwardapi-volume-163c9c3f-08df-48f0-a0f4-9b73e6ff85bb to disappear
Mar 27 20:49:45.936: INFO: Pod downwardapi-volume-163c9c3f-08df-48f0-a0f4-9b73e6ff85bb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 27 20:49:45.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-792" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":283,"completed":3,"skipped":34,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
Mar 27 20:50:01.325: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 20:50:04.771: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 20:50:19.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8354" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":283,"completed":4,"skipped":65,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 27 20:50:22.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4349" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":283,"completed":5,"skipped":70,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 20:50:22.405: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d0cee10-5822-4922-ba7e-05cc91ef3a68" in namespace "downward-api-5297" to be "Succeeded or Failed"
Mar 27 20:50:22.439: INFO: Pod "downwardapi-volume-0d0cee10-5822-4922-ba7e-05cc91ef3a68": Phase="Pending", Reason="", readiness=false. Elapsed: 34.147016ms
Mar 27 20:50:24.470: INFO: Pod "downwardapi-volume-0d0cee10-5822-4922-ba7e-05cc91ef3a68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065003495s
Mar 27 20:50:26.500: INFO: Pod "downwardapi-volume-0d0cee10-5822-4922-ba7e-05cc91ef3a68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095494251s
STEP: Saw pod success
Mar 27 20:50:26.500: INFO: Pod "downwardapi-volume-0d0cee10-5822-4922-ba7e-05cc91ef3a68" satisfied condition "Succeeded or Failed"
Mar 27 20:50:26.530: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-0d0cee10-5822-4922-ba7e-05cc91ef3a68 container client-container: <nil>
STEP: delete the pod
Mar 27 20:50:26.639: INFO: Waiting for pod downwardapi-volume-0d0cee10-5822-4922-ba7e-05cc91ef3a68 to disappear
Mar 27 20:50:26.670: INFO: Pod downwardapi-volume-0d0cee10-5822-4922-ba7e-05cc91ef3a68 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 27 20:50:26.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5297" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":6,"skipped":75,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 27 20:50:33.128: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 27 20:50:33.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2387" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":283,"completed":7,"skipped":100,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-2ace3444-aca0-4384-977b-25e489309445
STEP: Creating secret with name secret-projected-all-test-volume-20ccf2d7-22b4-4bd4-96f9-9055e7226b1c
STEP: Creating a pod to test Check all projections for projected volume plugin
Mar 27 20:50:33.572: INFO: Waiting up to 5m0s for pod "projected-volume-b6977ff5-e819-4ed3-83fc-00f8b4e9f2a1" in namespace "projected-901" to be "Succeeded or Failed"
Mar 27 20:50:33.606: INFO: Pod "projected-volume-b6977ff5-e819-4ed3-83fc-00f8b4e9f2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 33.864526ms
Mar 27 20:50:35.640: INFO: Pod "projected-volume-b6977ff5-e819-4ed3-83fc-00f8b4e9f2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067944913s
Mar 27 20:50:37.671: INFO: Pod "projected-volume-b6977ff5-e819-4ed3-83fc-00f8b4e9f2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098951019s
Mar 27 20:50:39.702: INFO: Pod "projected-volume-b6977ff5-e819-4ed3-83fc-00f8b4e9f2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130109332s
Mar 27 20:50:41.733: INFO: Pod "projected-volume-b6977ff5-e819-4ed3-83fc-00f8b4e9f2a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.161494151s
STEP: Saw pod success
Mar 27 20:50:41.733: INFO: Pod "projected-volume-b6977ff5-e819-4ed3-83fc-00f8b4e9f2a1" satisfied condition "Succeeded or Failed"
Mar 27 20:50:41.764: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod projected-volume-b6977ff5-e819-4ed3-83fc-00f8b4e9f2a1 container projected-all-volume-test: <nil>
STEP: delete the pod
Mar 27 20:50:41.862: INFO: Waiting for pod projected-volume-b6977ff5-e819-4ed3-83fc-00f8b4e9f2a1 to disappear
Mar 27 20:50:41.894: INFO: Pod projected-volume-b6977ff5-e819-4ed3-83fc-00f8b4e9f2a1 no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:175
Mar 27 20:50:41.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-901" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":283,"completed":8,"skipped":139,"failed":0}

------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 29 lines ...
Mar 27 20:51:06.935: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 20:51:07.201: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 27 20:51:07.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8563" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":283,"completed":9,"skipped":139,"failed":0}

------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-947h
STEP: Creating a pod to test atomic-volume-subpath
Mar 27 20:51:07.531: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-947h" in namespace "subpath-6562" to be "Succeeded or Failed"
Mar 27 20:51:07.563: INFO: Pod "pod-subpath-test-projected-947h": Phase="Pending", Reason="", readiness=false. Elapsed: 31.835598ms
Mar 27 20:51:09.596: INFO: Pod "pod-subpath-test-projected-947h": Phase="Running", Reason="", readiness=true. Elapsed: 2.064198557s
Mar 27 20:51:11.627: INFO: Pod "pod-subpath-test-projected-947h": Phase="Running", Reason="", readiness=true. Elapsed: 4.096002563s
Mar 27 20:51:13.659: INFO: Pod "pod-subpath-test-projected-947h": Phase="Running", Reason="", readiness=true. Elapsed: 6.127115031s
Mar 27 20:51:15.689: INFO: Pod "pod-subpath-test-projected-947h": Phase="Running", Reason="", readiness=true. Elapsed: 8.157733094s
Mar 27 20:51:17.720: INFO: Pod "pod-subpath-test-projected-947h": Phase="Running", Reason="", readiness=true. Elapsed: 10.188930113s
Mar 27 20:51:19.751: INFO: Pod "pod-subpath-test-projected-947h": Phase="Running", Reason="", readiness=true. Elapsed: 12.219866921s
Mar 27 20:51:21.782: INFO: Pod "pod-subpath-test-projected-947h": Phase="Running", Reason="", readiness=true. Elapsed: 14.250867309s
Mar 27 20:51:23.813: INFO: Pod "pod-subpath-test-projected-947h": Phase="Running", Reason="", readiness=true. Elapsed: 16.281812219s
Mar 27 20:51:25.844: INFO: Pod "pod-subpath-test-projected-947h": Phase="Running", Reason="", readiness=true. Elapsed: 18.312587114s
Mar 27 20:51:27.878: INFO: Pod "pod-subpath-test-projected-947h": Phase="Running", Reason="", readiness=true. Elapsed: 20.346403706s
Mar 27 20:51:29.910: INFO: Pod "pod-subpath-test-projected-947h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.378757122s
STEP: Saw pod success
Mar 27 20:51:29.910: INFO: Pod "pod-subpath-test-projected-947h" satisfied condition "Succeeded or Failed"
Mar 27 20:51:29.941: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-subpath-test-projected-947h container test-container-subpath-projected-947h: <nil>
STEP: delete the pod
Mar 27 20:51:30.019: INFO: Waiting for pod pod-subpath-test-projected-947h to disappear
Mar 27 20:51:30.051: INFO: Pod pod-subpath-test-projected-947h no longer exists
STEP: Deleting pod pod-subpath-test-projected-947h
Mar 27 20:51:30.051: INFO: Deleting pod "pod-subpath-test-projected-947h" in namespace "subpath-6562"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 27 20:51:30.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6562" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":283,"completed":10,"skipped":139,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Mar 27 20:51:33.079: INFO: Successfully updated pod "annotationupdate7fec6199-fbb2-4b54-806f-1c7571f3766c"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 27 20:51:37.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9233" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":11,"skipped":143,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-726a3ff0-5df6-4bfb-be58-2dd153d12563
STEP: Creating a pod to test consume secrets
Mar 27 20:51:37.488: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f6252ced-5130-4a2f-8def-3d56acf0f12d" in namespace "projected-5667" to be "Succeeded or Failed"
Mar 27 20:51:37.522: INFO: Pod "pod-projected-secrets-f6252ced-5130-4a2f-8def-3d56acf0f12d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.037964ms
Mar 27 20:51:39.552: INFO: Pod "pod-projected-secrets-f6252ced-5130-4a2f-8def-3d56acf0f12d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064734876s
STEP: Saw pod success
Mar 27 20:51:39.552: INFO: Pod "pod-projected-secrets-f6252ced-5130-4a2f-8def-3d56acf0f12d" satisfied condition "Succeeded or Failed"
Mar 27 20:51:39.583: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-secrets-f6252ced-5130-4a2f-8def-3d56acf0f12d container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 27 20:51:39.664: INFO: Waiting for pod pod-projected-secrets-f6252ced-5130-4a2f-8def-3d56acf0f12d to disappear
Mar 27 20:51:39.695: INFO: Pod pod-projected-secrets-f6252ced-5130-4a2f-8def-3d56acf0f12d no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 27 20:51:39.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5667" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":12,"skipped":149,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-7f616b7d-5e0e-4064-a42e-59889c1a6d6d
STEP: Creating a pod to test consume configMaps
Mar 27 20:51:40.001: INFO: Waiting up to 5m0s for pod "pod-configmaps-52474789-3c4e-44bf-876a-ab6892d50d34" in namespace "configmap-2544" to be "Succeeded or Failed"
Mar 27 20:51:40.034: INFO: Pod "pod-configmaps-52474789-3c4e-44bf-876a-ab6892d50d34": Phase="Pending", Reason="", readiness=false. Elapsed: 32.497694ms
Mar 27 20:51:42.065: INFO: Pod "pod-configmaps-52474789-3c4e-44bf-876a-ab6892d50d34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063495246s
STEP: Saw pod success
Mar 27 20:51:42.065: INFO: Pod "pod-configmaps-52474789-3c4e-44bf-876a-ab6892d50d34" satisfied condition "Succeeded or Failed"
Mar 27 20:51:42.096: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-configmaps-52474789-3c4e-44bf-876a-ab6892d50d34 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 20:51:42.175: INFO: Waiting for pod pod-configmaps-52474789-3c4e-44bf-876a-ab6892d50d34 to disappear
Mar 27 20:51:42.205: INFO: Pod pod-configmaps-52474789-3c4e-44bf-876a-ab6892d50d34 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 20:51:42.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2544" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":13,"skipped":163,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 27 20:52:08.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7309" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":283,"completed":14,"skipped":171,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
Mar 27 20:52:08.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4048" for this suite.
STEP: Destroying namespace "nspatchtest-e9f646a8-e6d7-4f8d-8941-dc7f84356f8d-1505" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":283,"completed":15,"skipped":177,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 27 20:52:13.574: INFO: stderr: ""
Mar 27 20:52:13.574: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4020-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 20:52:17.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3247" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":283,"completed":16,"skipped":217,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 27 20:52:19.397: INFO: Initial restart count of pod busybox-de49f178-7f8d-4e5d-b472-1257c55c5f8c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 27 20:56:21.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6200" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":283,"completed":17,"skipped":219,"failed":0}
SSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 27 20:56:21.290: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e574e1ac-99ac-475c-bd29-7f450eacc044" in namespace "security-context-test-2088" to be "Succeeded or Failed"
Mar 27 20:56:21.321: INFO: Pod "busybox-privileged-false-e574e1ac-99ac-475c-bd29-7f450eacc044": Phase="Pending", Reason="", readiness=false. Elapsed: 30.63293ms
Mar 27 20:56:23.350: INFO: Pod "busybox-privileged-false-e574e1ac-99ac-475c-bd29-7f450eacc044": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.05979675s
Mar 27 20:56:23.350: INFO: Pod "busybox-privileged-false-e574e1ac-99ac-475c-bd29-7f450eacc044" satisfied condition "Succeeded or Failed"
Mar 27 20:56:23.398: INFO: Got logs for pod "busybox-privileged-false-e574e1ac-99ac-475c-bd29-7f450eacc044": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 27 20:56:23.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2088" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":18,"skipped":222,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 23 lines ...
Mar 27 20:56:37.979: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 27 20:56:38.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8947" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":283,"completed":19,"skipped":227,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 20:56:38.103: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 27 20:56:38.272: INFO: Waiting up to 5m0s for pod "pod-78e10e73-3e8c-491b-b30d-d7e413cf5167" in namespace "emptydir-9816" to be "Succeeded or Failed"
Mar 27 20:56:38.302: INFO: Pod "pod-78e10e73-3e8c-491b-b30d-d7e413cf5167": Phase="Pending", Reason="", readiness=false. Elapsed: 29.481127ms
Mar 27 20:56:40.332: INFO: Pod "pod-78e10e73-3e8c-491b-b30d-d7e413cf5167": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060231662s
Mar 27 20:56:42.363: INFO: Pod "pod-78e10e73-3e8c-491b-b30d-d7e413cf5167": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090625304s
STEP: Saw pod success
Mar 27 20:56:42.363: INFO: Pod "pod-78e10e73-3e8c-491b-b30d-d7e413cf5167" satisfied condition "Succeeded or Failed"
Mar 27 20:56:42.393: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-78e10e73-3e8c-491b-b30d-d7e413cf5167 container test-container: <nil>
STEP: delete the pod
Mar 27 20:56:42.484: INFO: Waiting for pod pod-78e10e73-3e8c-491b-b30d-d7e413cf5167 to disappear
Mar 27 20:56:42.515: INFO: Pod pod-78e10e73-3e8c-491b-b30d-d7e413cf5167 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 20:56:42.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9816" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":20,"skipped":232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 28 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 27 20:56:50.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6626" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":283,"completed":21,"skipped":287,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 40 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 27 20:57:04.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3831" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":283,"completed":22,"skipped":292,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-b99d3e5a-d2d8-46e5-b93b-5a40e15f6112
STEP: Creating a pod to test consume configMaps
Mar 27 20:57:04.287: INFO: Waiting up to 5m0s for pod "pod-configmaps-338be737-0d53-4671-9037-d8946ef20b50" in namespace "configmap-3612" to be "Succeeded or Failed"
Mar 27 20:57:04.317: INFO: Pod "pod-configmaps-338be737-0d53-4671-9037-d8946ef20b50": Phase="Pending", Reason="", readiness=false. Elapsed: 30.462337ms
Mar 27 20:57:06.348: INFO: Pod "pod-configmaps-338be737-0d53-4671-9037-d8946ef20b50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061216303s
STEP: Saw pod success
Mar 27 20:57:06.348: INFO: Pod "pod-configmaps-338be737-0d53-4671-9037-d8946ef20b50" satisfied condition "Succeeded or Failed"
Mar 27 20:57:06.378: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-configmaps-338be737-0d53-4671-9037-d8946ef20b50 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 20:57:06.452: INFO: Waiting for pod pod-configmaps-338be737-0d53-4671-9037-d8946ef20b50 to disappear
Mar 27 20:57:06.481: INFO: Pod pod-configmaps-338be737-0d53-4671-9037-d8946ef20b50 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 20:57:06.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3612" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":23,"skipped":359,"failed":0}
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Mar 27 20:57:07.632: INFO: created pod pod-service-account-nomountsa-nomountspec
Mar 27 20:57:07.632: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 27 20:57:07.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6339" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":283,"completed":24,"skipped":363,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
Mar 27 20:57:12.739: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 27 20:57:12.739: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe pod agnhost-master-r2vwr --namespace=kubectl-9439'
Mar 27 20:57:13.003: INFO: stderr: ""
Mar 27 20:57:13.003: INFO: stdout: "Name:         agnhost-master-r2vwr\nNamespace:    kubectl-9439\nPriority:     0\nNode:         test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal/10.150.0.6\nStart Time:   Fri, 27 Mar 2020 20:57:08 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  cni.projectcalico.org/podIP: 192.168.0.25/32\nStatus:       Running\nIP:           192.168.0.25\nIPs:\n  IP:           192.168.0.25\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://8d69e6ef004a0b07e26384f1e5fcf058c232d0c117486df34a621612bf5df6bb\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 27 Mar 2020 20:57:10 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qdgpf (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-qdgpf:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-qdgpf\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                                                       Message\n  ----    ------     ----       ----                                                       -------\n  Normal  Scheduled  <unknown>  default-scheduler                                          Successfully assigned kubectl-9439/agnhost-master-r2vwr to test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal\n  Normal  Pulled     2s         kubelet, test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s         kubelet, test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal  Created container agnhost-master\n  Normal  Started    2s         kubelet, test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal  Started container agnhost-master\n"
Mar 27 20:57:13.003: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe rc agnhost-master --namespace=kubectl-9439'
Mar 27 20:57:13.316: INFO: stderr: ""
Mar 27 20:57:13.316: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-9439\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-r2vwr\n"
Mar 27 20:57:13.316: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe service agnhost-master --namespace=kubectl-9439'
Mar 27 20:57:13.597: INFO: stderr: ""
Mar 27 20:57:13.598: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-9439\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.105.9.95\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.0.25:6379\nSession Affinity:  None\nEvents:            <none>\n"
Mar 27 20:57:13.654: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe node test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal'
Mar 27 20:57:14.033: INFO: stderr: ""
Mar 27 20:57:14.033: INFO: stdout: "Name:               test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-2\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=us-east4\n                    failure-domain.beta.kubernetes.io/zone=us-east4-a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    projectcalico.org/IPv4Address: 10.150.0.2/32\n                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.6.128\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 27 Mar 2020 20:44:01 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal\n  AcquireTime:     <unset>\n  RenewTime:       Fri, 27 Mar 2020 20:57:11 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Fri, 27 Mar 2020 20:44:27 +0000   Fri, 27 Mar 2020 20:44:27 +0000   CalicoIsUp                   Calico is running on this node\n  MemoryPressure       False   Fri, 27 Mar 2020 20:56:28 +0000   Fri, 27 Mar 2020 20:44:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 27 Mar 2020 20:56:28 +0000   Fri, 27 Mar 2020 20:44:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 27 Mar 2020 20:56:28 +0000   Fri, 27 Mar 2020 20:44:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 27 Mar 2020 20:56:28 +0000   Fri, 27 Mar 2020 20:44:31 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.150.0.2\n  ExternalIP:   \n  InternalDNS:  test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal\n  Hostname:     test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        2\n  ephemeral-storage:          30308240Ki\n  hugepages-1Gi:              0\n  hugepages-2Mi:              0\n  memory:                     7648908Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        2\n  ephemeral-storage:          27932073938\n  hugepages-1Gi:              0\n  hugepages-2Mi:              0\n  memory:                     7546508Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 e8ba037ac7a31766b32bb26849c01031\n  System UUID:                e8ba037a-c7a3-1766-b32b-b26849c01031\n  Boot ID:                    f8b3c4df-234d-4ff2-9717-eaecc8403974\n  Kernel Version:             5.0.0-1033-gcp\n  OS Image:                   Ubuntu 18.04.4 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3\n  Kubelet Version:            v1.16.2\n  Kube-Proxy Version:         v1.16.2\nProviderID:                   gce://k8s-jkns-gci-gce-1-3/us-east4-a/test1-controlplane-0\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                 calico-kube-controllers-564b6667d7-dv7t5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m\n  kube-system                 calico-node-q9dbv                                                               250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m\n  kube-system                 coredns-5644d7b6d9-6dq9k                                                        100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m\n  kube-system                 coredns-5644d7b6d9-bf6vx                                                        100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m\n  kube-system                 etcd-test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m\n  kube-system                 kube-apiserver-test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m\n  kube-system                 kube-controller-manager-test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m\n  kube-system                 kube-proxy-n4q77                                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m\n  kube-system                 kube-scheduler-test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests    Limits\n  --------                   --------    ------\n  cpu                        1 (50%)     0 (0%)\n  memory                     140Mi (1%)  340Mi (4%)\n  ephemeral-storage          0 (0%)      0 (0%)\n  hugepages-1Gi              0 (0%)      0 (0%)\n  hugepages-2Mi              0 (0%)      0 (0%)\n  attachable-volumes-gce-pd  0           0\nEvents:\n  Type     Reason                   Age                From                                                              Message\n  ----     ------                   ----               ----                                                              -------\n  Normal   Starting                 15m                kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal     Starting kubelet.\n  Warning  InvalidDiskCapacity      15m                kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal     Node test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    15m (x7 over 15m)  kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal     Node test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal     Node test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced  15m                kubelet, test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal     Updated Node Allocatable limit across pods\n  Normal   Starting                 13m                kube-proxy, test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal  Starting kube-proxy.\n"
Mar 27 20:57:14.033: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe namespace kubectl-9439'
Mar 27 20:57:14.323: INFO: stderr: ""
Mar 27 20:57:14.323: INFO: stdout: "Name:         kubectl-9439\nLabels:       e2e-framework=kubectl\n              e2e-run=6c5d5875-d713-46d0-9894-db8d4b5313e3\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 20:57:14.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9439" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":283,"completed":25,"skipped":373,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 20:57:14.438: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 27 20:57:14.635: INFO: Waiting up to 5m0s for pod "pod-6a92491c-91bd-4c60-9d84-80313197299f" in namespace "emptydir-1086" to be "Succeeded or Failed"
Mar 27 20:57:14.669: INFO: Pod "pod-6a92491c-91bd-4c60-9d84-80313197299f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.29103ms
Mar 27 20:57:16.699: INFO: Pod "pod-6a92491c-91bd-4c60-9d84-80313197299f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064570493s
Mar 27 20:57:18.730: INFO: Pod "pod-6a92491c-91bd-4c60-9d84-80313197299f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094837777s
STEP: Saw pod success
Mar 27 20:57:18.730: INFO: Pod "pod-6a92491c-91bd-4c60-9d84-80313197299f" satisfied condition "Succeeded or Failed"
Mar 27 20:57:18.759: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-6a92491c-91bd-4c60-9d84-80313197299f container test-container: <nil>
STEP: delete the pod
Mar 27 20:57:18.836: INFO: Waiting for pod pod-6a92491c-91bd-4c60-9d84-80313197299f to disappear
Mar 27 20:57:18.873: INFO: Pod pod-6a92491c-91bd-4c60-9d84-80313197299f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 20:57:18.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1086" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":26,"skipped":374,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 27 20:57:19.094: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 20:57:19.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6266" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":283,"completed":27,"skipped":385,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 27 20:57:25.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3733" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":283,"completed":28,"skipped":404,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-ec8258d1-f8d8-4095-91c2-c29eba8ed786
STEP: Creating a pod to test consume secrets
Mar 27 20:57:25.986: INFO: Waiting up to 5m0s for pod "pod-secrets-c0877363-7897-4ebc-9cee-2597103ae402" in namespace "secrets-1364" to be "Succeeded or Failed"
Mar 27 20:57:26.016: INFO: Pod "pod-secrets-c0877363-7897-4ebc-9cee-2597103ae402": Phase="Pending", Reason="", readiness=false. Elapsed: 29.814729ms
Mar 27 20:57:28.045: INFO: Pod "pod-secrets-c0877363-7897-4ebc-9cee-2597103ae402": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059474691s
STEP: Saw pod success
Mar 27 20:57:28.045: INFO: Pod "pod-secrets-c0877363-7897-4ebc-9cee-2597103ae402" satisfied condition "Succeeded or Failed"
Mar 27 20:57:28.075: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-secrets-c0877363-7897-4ebc-9cee-2597103ae402 container secret-volume-test: <nil>
STEP: delete the pod
Mar 27 20:57:28.151: INFO: Waiting for pod pod-secrets-c0877363-7897-4ebc-9cee-2597103ae402 to disappear
Mar 27 20:57:28.181: INFO: Pod pod-secrets-c0877363-7897-4ebc-9cee-2597103ae402 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 27 20:57:28.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1364" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":29,"skipped":413,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Mar 27 20:57:51.260: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 20:57:51.534: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 27 20:57:51.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4447" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":30,"skipped":443,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 27 20:58:02.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4359" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":283,"completed":31,"skipped":444,"failed":0}

------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-edc41e4f-b796-40c4-96d2-f93b9d04a629
STEP: Creating a pod to test consume configMaps
Mar 27 20:58:02.566: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8659472c-9b5c-4c87-a6f7-ebfdb956dc6d" in namespace "projected-8600" to be "Succeeded or Failed"
Mar 27 20:58:02.601: INFO: Pod "pod-projected-configmaps-8659472c-9b5c-4c87-a6f7-ebfdb956dc6d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.584209ms
Mar 27 20:58:04.633: INFO: Pod "pod-projected-configmaps-8659472c-9b5c-4c87-a6f7-ebfdb956dc6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066687211s
STEP: Saw pod success
Mar 27 20:58:04.633: INFO: Pod "pod-projected-configmaps-8659472c-9b5c-4c87-a6f7-ebfdb956dc6d" satisfied condition "Succeeded or Failed"
Mar 27 20:58:04.662: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-configmaps-8659472c-9b5c-4c87-a6f7-ebfdb956dc6d container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 20:58:04.739: INFO: Waiting for pod pod-projected-configmaps-8659472c-9b5c-4c87-a6f7-ebfdb956dc6d to disappear
Mar 27 20:58:04.769: INFO: Pod pod-projected-configmaps-8659472c-9b5c-4c87-a6f7-ebfdb956dc6d no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 27 20:58:04.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8600" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":32,"skipped":444,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 20:58:04.858: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 27 20:58:05.024: INFO: Waiting up to 5m0s for pod "pod-de3cdc89-99f5-4e9b-b2b6-60a627b6af24" in namespace "emptydir-6431" to be "Succeeded or Failed"
Mar 27 20:58:05.055: INFO: Pod "pod-de3cdc89-99f5-4e9b-b2b6-60a627b6af24": Phase="Pending", Reason="", readiness=false. Elapsed: 31.351423ms
Mar 27 20:58:07.085: INFO: Pod "pod-de3cdc89-99f5-4e9b-b2b6-60a627b6af24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060689419s
STEP: Saw pod success
Mar 27 20:58:07.085: INFO: Pod "pod-de3cdc89-99f5-4e9b-b2b6-60a627b6af24" satisfied condition "Succeeded or Failed"
Mar 27 20:58:07.114: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-de3cdc89-99f5-4e9b-b2b6-60a627b6af24 container test-container: <nil>
STEP: delete the pod
Mar 27 20:58:07.190: INFO: Waiting for pod pod-de3cdc89-99f5-4e9b-b2b6-60a627b6af24 to disappear
Mar 27 20:58:07.221: INFO: Pod pod-de3cdc89-99f5-4e9b-b2b6-60a627b6af24 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 20:58:07.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6431" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":33,"skipped":444,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 85 lines ...
Mar 27 20:59:11.975: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 27 20:59:11.975: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 27 20:59:11.975: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 27 20:59:11.975: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-1242 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 20:59:12.412: INFO: rc: 1
Mar 27 20:59:12.412: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-1242 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "3461a23c99d3bdbeebae259aab3eff8565c364b19061513f00524fc5e7fc8b12": cannot exec in a stopped state: unknown

error:
exit status 1
Mar 27 20:59:22.413: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-1242 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 20:59:22.636: INFO: rc: 1
Mar 27 20:59:22.636: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-1242 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 280 lines ...
Mar 27 21:04:19.416: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-1242 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 21:04:19.646: INFO: rc: 1
Mar 27 21:04:19.646: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Mar 27 21:04:19.646: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":283,"completed":34,"skipped":457,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 21:04:20.250: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f5e2349-9eca-497f-ab05-14112d67f658" in namespace "projected-1064" to be "Succeeded or Failed"
Mar 27 21:04:20.282: INFO: Pod "downwardapi-volume-4f5e2349-9eca-497f-ab05-14112d67f658": Phase="Pending", Reason="", readiness=false. Elapsed: 31.54214ms
Mar 27 21:04:22.312: INFO: Pod "downwardapi-volume-4f5e2349-9eca-497f-ab05-14112d67f658": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061675081s
STEP: Saw pod success
Mar 27 21:04:22.312: INFO: Pod "downwardapi-volume-4f5e2349-9eca-497f-ab05-14112d67f658" satisfied condition "Succeeded or Failed"
Mar 27 21:04:22.341: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-4f5e2349-9eca-497f-ab05-14112d67f658 container client-container: <nil>
STEP: delete the pod
Mar 27 21:04:22.444: INFO: Waiting for pod downwardapi-volume-4f5e2349-9eca-497f-ab05-14112d67f658 to disappear
Mar 27 21:04:22.474: INFO: Pod downwardapi-volume-4f5e2349-9eca-497f-ab05-14112d67f658 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 27 21:04:22.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1064" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":35,"skipped":461,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 343 lines ...
Mar 27 21:04:33.952: INFO: Deleting ReplicationController proxy-service-2srmc took: 35.752511ms
Mar 27 21:04:34.052: INFO: Terminating ReplicationController proxy-service-2srmc pods took: 100.254478ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 27 21:04:46.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6315" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":283,"completed":36,"skipped":476,"failed":0}
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 30.496326ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 27 21:04:47.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1372" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":283,"completed":37,"skipped":477,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 21:04:47.662: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fca0561-81cf-4126-972a-093acf838706" in namespace "projected-2254" to be "Succeeded or Failed"
Mar 27 21:04:47.691: INFO: Pod "downwardapi-volume-2fca0561-81cf-4126-972a-093acf838706": Phase="Pending", Reason="", readiness=false. Elapsed: 29.079833ms
Mar 27 21:04:49.721: INFO: Pod "downwardapi-volume-2fca0561-81cf-4126-972a-093acf838706": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058251847s
STEP: Saw pod success
Mar 27 21:04:49.721: INFO: Pod "downwardapi-volume-2fca0561-81cf-4126-972a-093acf838706" satisfied condition "Succeeded or Failed"
Mar 27 21:04:49.750: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-2fca0561-81cf-4126-972a-093acf838706 container client-container: <nil>
STEP: delete the pod
Mar 27 21:04:49.839: INFO: Waiting for pod downwardapi-volume-2fca0561-81cf-4126-972a-093acf838706 to disappear
Mar 27 21:04:49.868: INFO: Pod downwardapi-volume-2fca0561-81cf-4126-972a-093acf838706 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 27 21:04:49.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2254" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":38,"skipped":492,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 27 21:04:52.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4361" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":283,"completed":39,"skipped":496,"failed":0}
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-3551/configmap-test-daab00ff-2e89-4227-93c7-b1bfd41cb2c3
STEP: Creating a pod to test consume configMaps
Mar 27 21:04:52.536: INFO: Waiting up to 5m0s for pod "pod-configmaps-09d10447-5513-49ff-ae1d-051f705d57bd" in namespace "configmap-3551" to be "Succeeded or Failed"
Mar 27 21:04:52.565: INFO: Pod "pod-configmaps-09d10447-5513-49ff-ae1d-051f705d57bd": Phase="Pending", Reason="", readiness=false. Elapsed: 29.812225ms
Mar 27 21:04:54.595: INFO: Pod "pod-configmaps-09d10447-5513-49ff-ae1d-051f705d57bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058979073s
STEP: Saw pod success
Mar 27 21:04:54.595: INFO: Pod "pod-configmaps-09d10447-5513-49ff-ae1d-051f705d57bd" satisfied condition "Succeeded or Failed"
Mar 27 21:04:54.623: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-configmaps-09d10447-5513-49ff-ae1d-051f705d57bd container env-test: <nil>
STEP: delete the pod
Mar 27 21:04:54.697: INFO: Waiting for pod pod-configmaps-09d10447-5513-49ff-ae1d-051f705d57bd to disappear
Mar 27 21:04:54.727: INFO: Pod pod-configmaps-09d10447-5513-49ff-ae1d-051f705d57bd no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 21:04:54.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3551" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":40,"skipped":500,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 21:04:54.817: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 27 21:04:55.004: INFO: Waiting up to 5m0s for pod "pod-72382e85-e0f6-4af4-b4a4-6de6fee18fbf" in namespace "emptydir-9166" to be "Succeeded or Failed"
Mar 27 21:04:55.035: INFO: Pod "pod-72382e85-e0f6-4af4-b4a4-6de6fee18fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 31.473612ms
Mar 27 21:04:57.065: INFO: Pod "pod-72382e85-e0f6-4af4-b4a4-6de6fee18fbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060895527s
STEP: Saw pod success
Mar 27 21:04:57.065: INFO: Pod "pod-72382e85-e0f6-4af4-b4a4-6de6fee18fbf" satisfied condition "Succeeded or Failed"
Mar 27 21:04:57.094: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-72382e85-e0f6-4af4-b4a4-6de6fee18fbf container test-container: <nil>
STEP: delete the pod
Mar 27 21:04:57.169: INFO: Waiting for pod pod-72382e85-e0f6-4af4-b4a4-6de6fee18fbf to disappear
Mar 27 21:04:57.199: INFO: Pod pod-72382e85-e0f6-4af4-b4a4-6de6fee18fbf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 21:04:57.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9166" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":41,"skipped":514,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 27 21:05:10.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2738" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":283,"completed":42,"skipped":517,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-883aeb3d-fbfe-49a4-94b9-11bd6ee9c9e0
STEP: Creating a pod to test consume configMaps
Mar 27 21:05:11.062: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c449a451-2552-4537-86f6-92238b8a27fd" in namespace "projected-9753" to be "Succeeded or Failed"
Mar 27 21:05:11.091: INFO: Pod "pod-projected-configmaps-c449a451-2552-4537-86f6-92238b8a27fd": Phase="Pending", Reason="", readiness=false. Elapsed: 29.293127ms
Mar 27 21:05:13.120: INFO: Pod "pod-projected-configmaps-c449a451-2552-4537-86f6-92238b8a27fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058267923s
STEP: Saw pod success
Mar 27 21:05:13.120: INFO: Pod "pod-projected-configmaps-c449a451-2552-4537-86f6-92238b8a27fd" satisfied condition "Succeeded or Failed"
Mar 27 21:05:13.149: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-configmaps-c449a451-2552-4537-86f6-92238b8a27fd container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 21:05:13.224: INFO: Waiting for pod pod-projected-configmaps-c449a451-2552-4537-86f6-92238b8a27fd to disappear
Mar 27 21:05:13.254: INFO: Pod pod-projected-configmaps-c449a451-2552-4537-86f6-92238b8a27fd no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 27 21:05:13.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9753" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":43,"skipped":523,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-72298fad-87ed-4cd9-8987-24c11be422f2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 27 21:06:47.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2611" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":44,"skipped":576,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Mar 27 21:09:11.994: INFO: Restart count of pod container-probe-1075/liveness-7b46bfe5-7637-4fca-9163-1ec52ab077df is now 5 (2m22.106284006s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 27 21:09:12.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1075" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":283,"completed":45,"skipped":583,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
Mar 27 21:09:18.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7808" for this suite.
STEP: Destroying namespace "webhook-7808-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":283,"completed":46,"skipped":642,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 38 lines ...
• [SLOW TEST:307.206 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":283,"completed":47,"skipped":645,"failed":0}
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 27 21:14:26.293: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 27 21:14:29.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3779" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":283,"completed":48,"skipped":651,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-f5371f46-bce0-40ce-93c6-c44a308790ab
STEP: Creating a pod to test consume configMaps
Mar 27 21:14:29.958: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-67e000b9-5a0f-4efd-9829-ce5e0f6a0c8c" in namespace "projected-4718" to be "Succeeded or Failed"
Mar 27 21:14:29.992: INFO: Pod "pod-projected-configmaps-67e000b9-5a0f-4efd-9829-ce5e0f6a0c8c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.265618ms
Mar 27 21:14:32.022: INFO: Pod "pod-projected-configmaps-67e000b9-5a0f-4efd-9829-ce5e0f6a0c8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064388613s
STEP: Saw pod success
Mar 27 21:14:32.022: INFO: Pod "pod-projected-configmaps-67e000b9-5a0f-4efd-9829-ce5e0f6a0c8c" satisfied condition "Succeeded or Failed"
Mar 27 21:14:32.052: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-configmaps-67e000b9-5a0f-4efd-9829-ce5e0f6a0c8c container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 21:14:32.153: INFO: Waiting for pod pod-projected-configmaps-67e000b9-5a0f-4efd-9829-ce5e0f6a0c8c to disappear
Mar 27 21:14:32.182: INFO: Pod pod-projected-configmaps-67e000b9-5a0f-4efd-9829-ce5e0f6a0c8c no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 27 21:14:32.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4718" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":49,"skipped":660,"failed":0}

------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 27 21:14:32.278: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-1b344d9b-8bf2-47cc-824b-6cb42dfcf238
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 21:14:32.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8897" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":283,"completed":50,"skipped":660,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-b0bc744f-6f33-42fe-a6ab-228f28569c70
STEP: Creating a pod to test consume secrets
Mar 27 21:14:32.703: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b6977b70-74a0-4c4d-8556-a539568b8a62" in namespace "projected-881" to be "Succeeded or Failed"
Mar 27 21:14:32.734: INFO: Pod "pod-projected-secrets-b6977b70-74a0-4c4d-8556-a539568b8a62": Phase="Pending", Reason="", readiness=false. Elapsed: 30.525155ms
Mar 27 21:14:34.763: INFO: Pod "pod-projected-secrets-b6977b70-74a0-4c4d-8556-a539568b8a62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060048398s
STEP: Saw pod success
Mar 27 21:14:34.763: INFO: Pod "pod-projected-secrets-b6977b70-74a0-4c4d-8556-a539568b8a62" satisfied condition "Succeeded or Failed"
Mar 27 21:14:34.795: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-secrets-b6977b70-74a0-4c4d-8556-a539568b8a62 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 27 21:14:34.871: INFO: Waiting for pod pod-projected-secrets-b6977b70-74a0-4c4d-8556-a539568b8a62 to disappear
Mar 27 21:14:34.900: INFO: Pod pod-projected-secrets-b6977b70-74a0-4c4d-8556-a539568b8a62 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 27 21:14:34.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-881" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":51,"skipped":687,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-c0bd0b62-758f-484b-8464-da433a081f6c
STEP: Creating a pod to test consume secrets
Mar 27 21:14:35.195: INFO: Waiting up to 5m0s for pod "pod-secrets-0678de31-0ddc-4b94-962b-5ef0887d5d5e" in namespace "secrets-2874" to be "Succeeded or Failed"
Mar 27 21:14:35.227: INFO: Pod "pod-secrets-0678de31-0ddc-4b94-962b-5ef0887d5d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.1804ms
Mar 27 21:14:37.257: INFO: Pod "pod-secrets-0678de31-0ddc-4b94-962b-5ef0887d5d5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061205806s
STEP: Saw pod success
Mar 27 21:14:37.257: INFO: Pod "pod-secrets-0678de31-0ddc-4b94-962b-5ef0887d5d5e" satisfied condition "Succeeded or Failed"
Mar 27 21:14:37.285: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-secrets-0678de31-0ddc-4b94-962b-5ef0887d5d5e container secret-volume-test: <nil>
STEP: delete the pod
Mar 27 21:14:37.366: INFO: Waiting for pod pod-secrets-0678de31-0ddc-4b94-962b-5ef0887d5d5e to disappear
Mar 27 21:14:37.395: INFO: Pod pod-secrets-0678de31-0ddc-4b94-962b-5ef0887d5d5e no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 27 21:14:37.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2874" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":52,"skipped":698,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0327 21:14:38.516322   24935 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 27 21:14:38.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3688" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":283,"completed":53,"skipped":703,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 27 21:14:38.588: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 27 21:16:38.843: INFO: Deleting pod "var-expansion-a26dc854-8a4a-4fd4-b128-017df3d46aac" in namespace "var-expansion-2913"
Mar 27 21:16:38.878: INFO: Wait up to 5m0s for pod "var-expansion-a26dc854-8a4a-4fd4-b128-017df3d46aac" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 27 21:16:40.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2913" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":283,"completed":54,"skipped":729,"failed":0}

------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
Mar 27 21:16:46.703: INFO: stdout: "service/rm3 exposed\n"
Mar 27 21:16:46.732: INFO: Service rm3 in namespace kubectl-6089 found.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 21:16:48.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6089" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":283,"completed":55,"skipped":729,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Mar 27 21:16:59.406: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8733 /api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed 11c7d1fd-74e5-4cec-8b1e-a15e2c4d02a7 7293 0 2020-03-27 21:16:49 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 27 21:16:59.406: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8733 /api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed 11c7d1fd-74e5-4cec-8b1e-a15e2c4d02a7 7296 0 2020-03-27 21:16:49 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 27 21:16:59.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8733" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":283,"completed":56,"skipped":760,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 27 21:16:59.494: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Wait for the deployment to be ready
Mar 27 21:17:00.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720940620, loc:(*time.Location)(0x7b56f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720940620, loc:(*time.Location)(0x7b56f40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720940620, loc:(*time.Location)(0x7b56f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720940620, loc:(*time.Location)(0x7b56f40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 27 21:17:02.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720940620, loc:(*time.Location)(0x7b56f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720940620, loc:(*time.Location)(0x7b56f40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720940620, loc:(*time.Location)(0x7b56f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720940620, loc:(*time.Location)(0x7b56f40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 27 21:17:05.576: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 21:17:05.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5798" for this suite.
STEP: Destroying namespace "webhook-5798-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":283,"completed":57,"skipped":767,"failed":0}

------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-0ed4b18b-50e2-4505-9503-9ae62145496a
STEP: Creating a pod to test consume secrets
Mar 27 21:17:06.367: INFO: Waiting up to 5m0s for pod "pod-secrets-6e88156e-5c9a-4202-a044-f5ac4e241bb3" in namespace "secrets-9011" to be "Succeeded or Failed"
Mar 27 21:17:06.396: INFO: Pod "pod-secrets-6e88156e-5c9a-4202-a044-f5ac4e241bb3": Phase="Pending", Reason="", readiness=false. Elapsed: 29.177275ms
Mar 27 21:17:08.425: INFO: Pod "pod-secrets-6e88156e-5c9a-4202-a044-f5ac4e241bb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058447688s
STEP: Saw pod success
Mar 27 21:17:08.425: INFO: Pod "pod-secrets-6e88156e-5c9a-4202-a044-f5ac4e241bb3" satisfied condition "Succeeded or Failed"
Mar 27 21:17:08.454: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-secrets-6e88156e-5c9a-4202-a044-f5ac4e241bb3 container secret-volume-test: <nil>
STEP: delete the pod
Mar 27 21:17:08.540: INFO: Waiting for pod pod-secrets-6e88156e-5c9a-4202-a044-f5ac4e241bb3 to disappear
Mar 27 21:17:08.570: INFO: Pod pod-secrets-6e88156e-5c9a-4202-a044-f5ac4e241bb3 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 27 21:17:08.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9011" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":58,"skipped":767,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 27 21:17:11.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5568" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":283,"completed":59,"skipped":788,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-dde4def0-5ac7-4466-93da-9c23f457834c
STEP: Creating a pod to test consume secrets
Mar 27 21:17:11.361: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b23e3d17-286c-4df1-9eb6-06d9f0b162ef" in namespace "projected-7300" to be "Succeeded or Failed"
Mar 27 21:17:11.392: INFO: Pod "pod-projected-secrets-b23e3d17-286c-4df1-9eb6-06d9f0b162ef": Phase="Pending", Reason="", readiness=false. Elapsed: 31.576688ms
Mar 27 21:17:13.421: INFO: Pod "pod-projected-secrets-b23e3d17-286c-4df1-9eb6-06d9f0b162ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060687487s
STEP: Saw pod success
Mar 27 21:17:13.421: INFO: Pod "pod-projected-secrets-b23e3d17-286c-4df1-9eb6-06d9f0b162ef" satisfied condition "Succeeded or Failed"
Mar 27 21:17:13.451: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-secrets-b23e3d17-286c-4df1-9eb6-06d9f0b162ef container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 27 21:17:13.536: INFO: Waiting for pod pod-projected-secrets-b23e3d17-286c-4df1-9eb6-06d9f0b162ef to disappear
Mar 27 21:17:13.566: INFO: Pod pod-projected-secrets-b23e3d17-286c-4df1-9eb6-06d9f0b162ef no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 27 21:17:13.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7300" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":60,"skipped":797,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 43 lines ...
Mar 27 21:18:45.175: INFO: Waiting for statefulset status.replicas updated to 0
Mar 27 21:18:45.205: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 27 21:18:45.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9423" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":283,"completed":61,"skipped":833,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 27 21:18:45.534: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy --unix-socket=/tmp/kubectl-proxy-unix621345193/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 21:18:45.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6480" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":283,"completed":62,"skipped":886,"failed":0}
SS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 26 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 27 21:18:53.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6201" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":283,"completed":63,"skipped":888,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 27 21:18:59.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1070" for this suite.
STEP: Destroying namespace "webhook-1070-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":283,"completed":64,"skipped":903,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Mar 27 21:19:08.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8913" for this suite.
STEP: Destroying namespace "webhook-8913-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":283,"completed":65,"skipped":912,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 21:19:08.476: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 27 21:19:08.638: INFO: Waiting up to 5m0s for pod "pod-338695c4-448b-45a9-880d-5d223cd09c3d" in namespace "emptydir-9905" to be "Succeeded or Failed"
Mar 27 21:19:08.667: INFO: Pod "pod-338695c4-448b-45a9-880d-5d223cd09c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.380341ms
Mar 27 21:19:10.697: INFO: Pod "pod-338695c4-448b-45a9-880d-5d223cd09c3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058727882s
STEP: Saw pod success
Mar 27 21:19:10.697: INFO: Pod "pod-338695c4-448b-45a9-880d-5d223cd09c3d" satisfied condition "Succeeded or Failed"
Mar 27 21:19:10.726: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-338695c4-448b-45a9-880d-5d223cd09c3d container test-container: <nil>
STEP: delete the pod
Mar 27 21:19:10.815: INFO: Waiting for pod pod-338695c4-448b-45a9-880d-5d223cd09c3d to disappear
Mar 27 21:19:10.846: INFO: Pod pod-338695c4-448b-45a9-880d-5d223cd09c3d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 21:19:10.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9905" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":66,"skipped":944,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Mar 27 21:19:19.258: INFO: stderr: ""
Mar 27 21:19:19.258: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 21:19:19.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-759" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":283,"completed":67,"skipped":954,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 35 lines ...

W0327 21:19:30.005156   24935 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 27 21:19:30.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7172" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":283,"completed":68,"skipped":957,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-95ca82a6-c4d8-4191-b6a8-c0b67e29f48a
STEP: Creating a pod to test consume configMaps
Mar 27 21:19:30.270: INFO: Waiting up to 5m0s for pod "pod-configmaps-6626a32d-9d73-436a-aca2-082f0ae7d860" in namespace "configmap-4313" to be "Succeeded or Failed"
Mar 27 21:19:30.301: INFO: Pod "pod-configmaps-6626a32d-9d73-436a-aca2-082f0ae7d860": Phase="Pending", Reason="", readiness=false. Elapsed: 31.493992ms
Mar 27 21:19:32.332: INFO: Pod "pod-configmaps-6626a32d-9d73-436a-aca2-082f0ae7d860": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062461529s
STEP: Saw pod success
Mar 27 21:19:32.332: INFO: Pod "pod-configmaps-6626a32d-9d73-436a-aca2-082f0ae7d860" satisfied condition "Succeeded or Failed"
Mar 27 21:19:32.361: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-configmaps-6626a32d-9d73-436a-aca2-082f0ae7d860 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 21:19:32.452: INFO: Waiting for pod pod-configmaps-6626a32d-9d73-436a-aca2-082f0ae7d860 to disappear
Mar 27 21:19:32.481: INFO: Pod pod-configmaps-6626a32d-9d73-436a-aca2-082f0ae7d860 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 21:19:32.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4313" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":69,"skipped":1004,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-4f7a0e60-887e-44a6-a93d-0e698011ca97
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 27 21:19:37.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3346" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":70,"skipped":1005,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 27 21:19:37.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-556" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":283,"completed":71,"skipped":1024,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 21:19:37.598: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar 27 21:19:37.774: INFO: Waiting up to 5m0s for pod "pod-a6976c80-bf47-4abd-86ca-885a9a044c0a" in namespace "emptydir-7283" to be "Succeeded or Failed"
Mar 27 21:19:37.804: INFO: Pod "pod-a6976c80-bf47-4abd-86ca-885a9a044c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.412127ms
Mar 27 21:19:39.833: INFO: Pod "pod-a6976c80-bf47-4abd-86ca-885a9a044c0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059134604s
STEP: Saw pod success
Mar 27 21:19:39.833: INFO: Pod "pod-a6976c80-bf47-4abd-86ca-885a9a044c0a" satisfied condition "Succeeded or Failed"
Mar 27 21:19:39.862: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-a6976c80-bf47-4abd-86ca-885a9a044c0a container test-container: <nil>
STEP: delete the pod
Mar 27 21:19:39.938: INFO: Waiting for pod pod-a6976c80-bf47-4abd-86ca-885a9a044c0a to disappear
Mar 27 21:19:39.969: INFO: Pod pod-a6976c80-bf47-4abd-86ca-885a9a044c0a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 21:19:39.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7283" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":72,"skipped":1035,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 27 21:19:42.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9350" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":73,"skipped":1044,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 121 lines ...
Mar 27 21:19:59.539: INFO: stderr: ""
Mar 27 21:19:59.539: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 21:19:59.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3623" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":283,"completed":74,"skipped":1046,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 27 21:19:59.629: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Mar 27 21:19:59.788: INFO: Waiting up to 5m0s for pod "var-expansion-406082d1-2191-4e54-a9fe-c2356d5e4f13" in namespace "var-expansion-5567" to be "Succeeded or Failed"
Mar 27 21:19:59.819: INFO: Pod "var-expansion-406082d1-2191-4e54-a9fe-c2356d5e4f13": Phase="Pending", Reason="", readiness=false. Elapsed: 30.789192ms
Mar 27 21:20:01.848: INFO: Pod "var-expansion-406082d1-2191-4e54-a9fe-c2356d5e4f13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06025379s
STEP: Saw pod success
Mar 27 21:20:01.848: INFO: Pod "var-expansion-406082d1-2191-4e54-a9fe-c2356d5e4f13" satisfied condition "Succeeded or Failed"
Mar 27 21:20:01.877: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod var-expansion-406082d1-2191-4e54-a9fe-c2356d5e4f13 container dapi-container: <nil>
STEP: delete the pod
Mar 27 21:20:01.951: INFO: Waiting for pod var-expansion-406082d1-2191-4e54-a9fe-c2356d5e4f13 to disappear
Mar 27 21:20:01.981: INFO: Pod var-expansion-406082d1-2191-4e54-a9fe-c2356d5e4f13 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 27 21:20:01.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5567" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":283,"completed":75,"skipped":1090,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 27 21:20:02.082: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 27 21:20:02.439: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:02.439: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:02.439: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:02.473: INFO: Number of nodes with available pods: 0
Mar 27 21:20:02.473: INFO: Node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal is running more than one daemon pod
Mar 27 21:20:03.528: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:03.528: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:03.528: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:03.557: INFO: Number of nodes with available pods: 2
Mar 27 21:20:03.557: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 27 21:20:03.688: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:03.688: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:03.688: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:03.720: INFO: Number of nodes with available pods: 1
Mar 27 21:20:03.720: INFO: Node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal is running more than one daemon pod
Mar 27 21:20:04.775: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
... skipping 3 lines ...
Mar 27 21:20:04.804: INFO: Node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal is running more than one daemon pod
Mar 27 21:20:05.774: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:05.774: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:05.774: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-jkns-gci-gce-1-3.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 27 21:20:05.804: INFO: Number of nodes with available pods: 2
Mar 27 21:20:05.805: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5301, will wait for the garbage collector to delete the pods
Mar 27 21:20:05.980: INFO: Deleting DaemonSet.extensions daemon-set took: 35.286405ms
Mar 27 21:20:06.080: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.224165ms
... skipping 4 lines ...
Mar 27 21:20:16.672: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5301/pods","resourceVersion":"9185"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 27 21:20:16.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5301" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":283,"completed":76,"skipped":1096,"failed":0}

------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 27 21:20:16.856: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-0a2370b9-227f-4f67-9f72-15ce76c5cf9a
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 27 21:20:17.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9790" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":283,"completed":77,"skipped":1096,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 27 21:20:17.235: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9bd7d5e5-684b-4a32-b722-c12c659442e1" in namespace "security-context-test-7102" to be "Succeeded or Failed"
Mar 27 21:20:17.264: INFO: Pod "alpine-nnp-false-9bd7d5e5-684b-4a32-b722-c12c659442e1": Phase="Pending", Reason="", readiness=false. Elapsed: 28.685826ms
Mar 27 21:20:19.294: INFO: Pod "alpine-nnp-false-9bd7d5e5-684b-4a32-b722-c12c659442e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05814797s
Mar 27 21:20:21.323: INFO: Pod "alpine-nnp-false-9bd7d5e5-684b-4a32-b722-c12c659442e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087856358s
Mar 27 21:20:21.323: INFO: Pod "alpine-nnp-false-9bd7d5e5-684b-4a32-b722-c12c659442e1" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 27 21:20:21.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7102" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":78,"skipped":1108,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 27 21:20:21.576: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 21:20:22.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4710" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":283,"completed":79,"skipped":1169,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
Mar 27 21:20:29.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8432" for this suite.
STEP: Destroying namespace "nsdeletetest-3930" for this suite.
Mar 27 21:20:29.796: INFO: Namespace nsdeletetest-3930 was already deleted
STEP: Destroying namespace "nsdeletetest-801" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":283,"completed":80,"skipped":1212,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-078e1fa4-0d73-4b6a-8bfe-18282e06c865
STEP: Creating a pod to test consume secrets
Mar 27 21:20:30.036: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c1f7e8b8-9442-4225-a5b4-09899d39f1ee" in namespace "projected-7016" to be "Succeeded or Failed"
Mar 27 21:20:30.069: INFO: Pod "pod-projected-secrets-c1f7e8b8-9442-4225-a5b4-09899d39f1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 32.281895ms
Mar 27 21:20:32.098: INFO: Pod "pod-projected-secrets-c1f7e8b8-9442-4225-a5b4-09899d39f1ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062256618s
STEP: Saw pod success
Mar 27 21:20:32.099: INFO: Pod "pod-projected-secrets-c1f7e8b8-9442-4225-a5b4-09899d39f1ee" satisfied condition "Succeeded or Failed"
Mar 27 21:20:32.127: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-secrets-c1f7e8b8-9442-4225-a5b4-09899d39f1ee container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 27 21:20:32.203: INFO: Waiting for pod pod-projected-secrets-c1f7e8b8-9442-4225-a5b4-09899d39f1ee to disappear
Mar 27 21:20:32.232: INFO: Pod pod-projected-secrets-c1f7e8b8-9442-4225-a5b4-09899d39f1ee no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 27 21:20:32.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7016" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":81,"skipped":1228,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 27 21:20:46.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1873" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":283,"completed":82,"skipped":1261,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 16 lines ...
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 27 21:20:56.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-800" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":283,"completed":83,"skipped":1280,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 21:20:56.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25cc9a26-7094-4364-aeca-ec13521ca753" in namespace "downward-api-6423" to be "Succeeded or Failed"
Mar 27 21:20:56.885: INFO: Pod "downwardapi-volume-25cc9a26-7094-4364-aeca-ec13521ca753": Phase="Pending", Reason="", readiness=false. Elapsed: 29.034123ms
Mar 27 21:20:58.914: INFO: Pod "downwardapi-volume-25cc9a26-7094-4364-aeca-ec13521ca753": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058565431s
STEP: Saw pod success
Mar 27 21:20:58.914: INFO: Pod "downwardapi-volume-25cc9a26-7094-4364-aeca-ec13521ca753" satisfied condition "Succeeded or Failed"
Mar 27 21:20:58.943: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-25cc9a26-7094-4364-aeca-ec13521ca753 container client-container: <nil>
STEP: delete the pod
Mar 27 21:20:59.028: INFO: Waiting for pod downwardapi-volume-25cc9a26-7094-4364-aeca-ec13521ca753 to disappear
Mar 27 21:20:59.057: INFO: Pod downwardapi-volume-25cc9a26-7094-4364-aeca-ec13521ca753 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 27 21:20:59.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6423" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":283,"completed":84,"skipped":1280,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 27 21:20:59.471: INFO: stderr: ""
Mar 27 21:20:59.471: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://35.244.159.35:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://35.244.159.35:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 21:20:59.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5881" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":283,"completed":85,"skipped":1342,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 27 21:20:59.544: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 27 21:21:05.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7598" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":283,"completed":86,"skipped":1361,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  test/e2e/framework/framework.go:175
Mar 27 21:21:09.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5769" for this suite.
STEP: Destroying namespace "webhook-5769-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":283,"completed":87,"skipped":1363,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 27 21:21:10.066: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 27 21:21:10.241: INFO: Waiting up to 5m0s for pod "downward-api-5cbd5683-85f9-4f60-85e3-c99634d40173" in namespace "downward-api-3056" to be "Succeeded or Failed"
Mar 27 21:21:10.272: INFO: Pod "downward-api-5cbd5683-85f9-4f60-85e3-c99634d40173": Phase="Pending", Reason="", readiness=false. Elapsed: 30.350394ms
Mar 27 21:21:12.301: INFO: Pod "downward-api-5cbd5683-85f9-4f60-85e3-c99634d40173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059756617s
STEP: Saw pod success
Mar 27 21:21:12.301: INFO: Pod "downward-api-5cbd5683-85f9-4f60-85e3-c99634d40173" satisfied condition "Succeeded or Failed"
Mar 27 21:21:12.330: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downward-api-5cbd5683-85f9-4f60-85e3-c99634d40173 container dapi-container: <nil>
STEP: delete the pod
Mar 27 21:21:12.409: INFO: Waiting for pod downward-api-5cbd5683-85f9-4f60-85e3-c99634d40173 to disappear
Mar 27 21:21:12.438: INFO: Pod downward-api-5cbd5683-85f9-4f60-85e3-c99634d40173 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 27 21:21:12.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3056" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":283,"completed":88,"skipped":1372,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 27 21:22:12.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9087" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":283,"completed":89,"skipped":1459,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-1fae98ab-f92b-4a0d-a895-140ecd32ceca
STEP: Creating a pod to test consume secrets
Mar 27 21:22:13.011: INFO: Waiting up to 5m0s for pod "pod-secrets-15d27952-fa0f-4783-9081-0dc0c73d34c5" in namespace "secrets-8329" to be "Succeeded or Failed"
Mar 27 21:22:13.046: INFO: Pod "pod-secrets-15d27952-fa0f-4783-9081-0dc0c73d34c5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.621649ms
Mar 27 21:22:15.076: INFO: Pod "pod-secrets-15d27952-fa0f-4783-9081-0dc0c73d34c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064469107s
STEP: Saw pod success
Mar 27 21:22:15.076: INFO: Pod "pod-secrets-15d27952-fa0f-4783-9081-0dc0c73d34c5" satisfied condition "Succeeded or Failed"
Mar 27 21:22:15.104: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-secrets-15d27952-fa0f-4783-9081-0dc0c73d34c5 container secret-env-test: <nil>
STEP: delete the pod
Mar 27 21:22:15.184: INFO: Waiting for pod pod-secrets-15d27952-fa0f-4783-9081-0dc0c73d34c5 to disappear
Mar 27 21:22:15.213: INFO: Pod pod-secrets-15d27952-fa0f-4783-9081-0dc0c73d34c5 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 27 21:22:15.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8329" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":283,"completed":90,"skipped":1468,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 27 21:22:17.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7452" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":283,"completed":91,"skipped":1483,"failed":0}

------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 27 21:22:17.963: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 27 21:24:18.226: INFO: Deleting pod "var-expansion-baa889b3-3614-4996-873b-10ec19c7d9ac" in namespace "var-expansion-6930"
Mar 27 21:24:18.263: INFO: Wait up to 5m0s for pod "var-expansion-baa889b3-3614-4996-873b-10ec19c7d9ac" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 27 21:24:20.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6930" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":283,"completed":92,"skipped":1483,"failed":0}

------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  test/e2e/framework/framework.go:175
Mar 27 21:24:20.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-6743" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":283,"completed":93,"skipped":1483,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
Mar 27 21:24:28.625: INFO: stderr: ""
Mar 27 21:24:28.625: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 21:24:28.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3866" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":283,"completed":94,"skipped":1491,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 27 21:24:28.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-667" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":283,"completed":95,"skipped":1514,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 21:24:29.150: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf84f9f8-0080-4a95-9a15-12bd326dca46" in namespace "projected-8095" to be "Succeeded or Failed"
Mar 27 21:24:29.183: INFO: Pod "downwardapi-volume-cf84f9f8-0080-4a95-9a15-12bd326dca46": Phase="Pending", Reason="", readiness=false. Elapsed: 32.23596ms
Mar 27 21:24:31.212: INFO: Pod "downwardapi-volume-cf84f9f8-0080-4a95-9a15-12bd326dca46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062028374s
STEP: Saw pod success
Mar 27 21:24:31.212: INFO: Pod "downwardapi-volume-cf84f9f8-0080-4a95-9a15-12bd326dca46" satisfied condition "Succeeded or Failed"
Mar 27 21:24:31.242: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-cf84f9f8-0080-4a95-9a15-12bd326dca46 container client-container: <nil>
STEP: delete the pod
Mar 27 21:24:31.327: INFO: Waiting for pod downwardapi-volume-cf84f9f8-0080-4a95-9a15-12bd326dca46 to disappear
Mar 27 21:24:31.358: INFO: Pod downwardapi-volume-cf84f9f8-0080-4a95-9a15-12bd326dca46 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 27 21:24:31.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8095" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":96,"skipped":1577,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 27 21:24:37.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5729" for this suite.
STEP: Destroying namespace "webhook-5729-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":283,"completed":97,"skipped":1580,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 27 21:24:37.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9498" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":283,"completed":98,"skipped":1611,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 27 21:24:40.235: INFO: Initial restart count of pod liveness-332581db-d1e2-4c41-a77c-86b24e41edc5 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 27 21:28:41.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-967" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":283,"completed":99,"skipped":1639,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 27 21:28:44.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8046" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":283,"completed":100,"skipped":1657,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 27 21:28:55.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9162" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":283,"completed":101,"skipped":1670,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Mar 27 21:28:58.260: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:28:58.291: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:28:58.385: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:28:58.416: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:28:58.447: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:28:58.478: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:28:58.539: INFO: Lookups using dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local]

Mar 27 21:29:03.570: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:03.601: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:03.632: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:03.664: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:03.756: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:03.788: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:03.819: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:03.850: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:03.925: INFO: Lookups using dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local]

Mar 27 21:29:08.570: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:08.601: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:08.631: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:08.663: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:08.754: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:08.785: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:08.817: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:08.848: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:08.911: INFO: Lookups using dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local]

Mar 27 21:29:13.570: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:13.601: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:13.632: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:13.663: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:13.756: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:13.787: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:13.818: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:13.849: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:13.918: INFO: Lookups using dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local]

Mar 27 21:29:18.570: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:18.601: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:18.632: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:18.663: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:18.754: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:18.785: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:18.815: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:18.846: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:18.910: INFO: Lookups using dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local]

Mar 27 21:29:23.571: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:23.601: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:23.633: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:23.664: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:23.757: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:23.788: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:23.819: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:23.851: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local from pod dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4: the server could not find the requested resource (get pods dns-test-e88afdc0-d699-4a93-896a-343f811259c4)
Mar 27 21:29:23.924: INFO: Lookups using dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4296.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4296.svc.cluster.local jessie_udp@dns-test-service-2.dns-4296.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4296.svc.cluster.local]

Mar 27 21:29:28.914: INFO: DNS probes using dns-4296/dns-test-e88afdc0-d699-4a93-896a-343f811259c4 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 27 21:29:29.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4296" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":283,"completed":102,"skipped":1685,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Mar 27 21:30:19.579: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-653 /api/v1/namespaces/watch-653/configmaps/e2e-watch-test-configmap-b ead70af9-8c42-4677-8eb2-e309678bac06 11450 0 2020-03-27 21:30:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 27 21:30:19.579: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-653 /api/v1/namespaces/watch-653/configmaps/e2e-watch-test-configmap-b ead70af9-8c42-4677-8eb2-e309678bac06 11450 0 2020-03-27 21:30:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 27 21:30:29.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-653" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":283,"completed":103,"skipped":1701,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Mar 27 21:30:31.951: INFO: Pod pod-hostip-32e905f7-02e1-47f1-8b4b-a1c3578b147c has hostIP: 10.150.0.6
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 27 21:30:31.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7923" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":283,"completed":104,"skipped":1725,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  test/e2e/framework/framework.go:175
Mar 27 21:30:38.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-896" for this suite.
STEP: Destroying namespace "webhook-896-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":283,"completed":105,"skipped":1775,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Mar 27 21:30:41.990: INFO: Deleting pod "var-expansion-9311adb0-1d6d-4a82-b7fe-7d9498375e30" in namespace "var-expansion-5476"
Mar 27 21:30:42.026: INFO: Wait up to 5m0s for pod "var-expansion-9311adb0-1d6d-4a82-b7fe-7d9498375e30" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 27 21:31:28.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5476" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":283,"completed":106,"skipped":1784,"failed":0}
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-3676/configmap-test-50ba16b9-0287-4057-be3b-2a93fc7d6933
STEP: Creating a pod to test consume configMaps
Mar 27 21:31:28.368: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4ba747c-841f-4cbf-8ae9-479ca0a5cdac" in namespace "configmap-3676" to be "Succeeded or Failed"
Mar 27 21:31:28.400: INFO: Pod "pod-configmaps-f4ba747c-841f-4cbf-8ae9-479ca0a5cdac": Phase="Pending", Reason="", readiness=false. Elapsed: 32.728789ms
Mar 27 21:31:30.430: INFO: Pod "pod-configmaps-f4ba747c-841f-4cbf-8ae9-479ca0a5cdac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062593519s
STEP: Saw pod success
Mar 27 21:31:30.430: INFO: Pod "pod-configmaps-f4ba747c-841f-4cbf-8ae9-479ca0a5cdac" satisfied condition "Succeeded or Failed"
Mar 27 21:31:30.460: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-configmaps-f4ba747c-841f-4cbf-8ae9-479ca0a5cdac container env-test: <nil>
STEP: delete the pod
Mar 27 21:31:30.545: INFO: Waiting for pod pod-configmaps-f4ba747c-841f-4cbf-8ae9-479ca0a5cdac to disappear
Mar 27 21:31:30.576: INFO: Pod pod-configmaps-f4ba747c-841f-4cbf-8ae9-479ca0a5cdac no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 21:31:30.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3676" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":283,"completed":107,"skipped":1792,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-48dcff93-555e-4a3e-97aa-5ef7e0e4ee7a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 21:31:37.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9750" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":108,"skipped":1798,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 21:31:37.436: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9deb6672-df02-417d-a597-805330bd4882" in namespace "projected-6223" to be "Succeeded or Failed"
Mar 27 21:31:37.465: INFO: Pod "downwardapi-volume-9deb6672-df02-417d-a597-805330bd4882": Phase="Pending", Reason="", readiness=false. Elapsed: 28.962567ms
Mar 27 21:31:39.495: INFO: Pod "downwardapi-volume-9deb6672-df02-417d-a597-805330bd4882": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058507863s
STEP: Saw pod success
Mar 27 21:31:39.495: INFO: Pod "downwardapi-volume-9deb6672-df02-417d-a597-805330bd4882" satisfied condition "Succeeded or Failed"
Mar 27 21:31:39.526: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-9deb6672-df02-417d-a597-805330bd4882 container client-container: <nil>
STEP: delete the pod
Mar 27 21:31:39.602: INFO: Waiting for pod downwardapi-volume-9deb6672-df02-417d-a597-805330bd4882 to disappear
Mar 27 21:31:39.632: INFO: Pod downwardapi-volume-9deb6672-df02-417d-a597-805330bd4882 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 27 21:31:39.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6223" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":283,"completed":109,"skipped":1814,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 27 21:31:46.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3198" for this suite.
STEP: Destroying namespace "webhook-3198-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":283,"completed":110,"skipped":1817,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 22 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 27 21:31:54.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3557" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":283,"completed":111,"skipped":1844,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 27 21:31:59.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7964" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":283,"completed":112,"skipped":1849,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 34 lines ...
Mar 27 21:34:11.782: INFO: Deleting pod "var-expansion-83c006ef-77d6-47f0-b4b2-ae60091fef90" in namespace "var-expansion-2787"
Mar 27 21:34:11.820: INFO: Wait up to 5m0s for pod "var-expansion-83c006ef-77d6-47f0-b4b2-ae60091fef90" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 27 21:34:47.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2787" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":283,"completed":113,"skipped":1858,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 27 21:34:59.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-526" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":283,"completed":114,"skipped":1870,"failed":0}

------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Mar 27 21:35:03.159: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-9241 pod-service-account-c92ad2ae-6ddb-40ec-aa8a-624b4a6762b4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 27 21:35:03.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9241" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":283,"completed":115,"skipped":1870,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Mar 27 21:35:11.247: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-2171-crds.spec'
Mar 27 21:35:11.552: INFO: stderr: ""
Mar 27 21:35:11.552: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2171-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Mar 27 21:35:11.552: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-2171-crds.spec.bars'
Mar 27 21:35:11.846: INFO: stderr: ""
Mar 27 21:35:11.846: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2171-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar 27 21:35:11.846: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-2171-crds.spec.bars2'
Mar 27 21:35:12.131: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 21:35:15.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5313" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":283,"completed":116,"skipped":1884,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 27 21:35:17.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5605" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":283,"completed":117,"skipped":1891,"failed":0}

------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-5b461253-91ed-481a-a34d-24203b38f206
STEP: Creating a pod to test consume configMaps
Mar 27 21:35:18.030: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-effc3dfa-4fc1-4bb7-a360-ae42f4fd0cd6" in namespace "projected-4190" to be "Succeeded or Failed"
Mar 27 21:35:18.060: INFO: Pod "pod-projected-configmaps-effc3dfa-4fc1-4bb7-a360-ae42f4fd0cd6": Phase="Pending", Reason="", readiness=false. Elapsed: 29.178699ms
Mar 27 21:35:20.089: INFO: Pod "pod-projected-configmaps-effc3dfa-4fc1-4bb7-a360-ae42f4fd0cd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058828614s
STEP: Saw pod success
Mar 27 21:35:20.089: INFO: Pod "pod-projected-configmaps-effc3dfa-4fc1-4bb7-a360-ae42f4fd0cd6" satisfied condition "Succeeded or Failed"
Mar 27 21:35:20.118: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-configmaps-effc3dfa-4fc1-4bb7-a360-ae42f4fd0cd6 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 21:35:20.205: INFO: Waiting for pod pod-projected-configmaps-effc3dfa-4fc1-4bb7-a360-ae42f4fd0cd6 to disappear
Mar 27 21:35:20.236: INFO: Pod pod-projected-configmaps-effc3dfa-4fc1-4bb7-a360-ae42f4fd0cd6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 27 21:35:20.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4190" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":118,"skipped":1891,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 27 21:35:22.590: INFO: Initial restart count of pod test-webserver-bf52db7d-80e6-4cd5-bc77-c259a33f0b50 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 27 21:39:24.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8884" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":119,"skipped":1904,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 27 21:39:24.624: INFO: stderr: ""
Mar 27 21:39:24.624: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd.projectcalico.org/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 21:39:24.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3328" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":283,"completed":120,"skipped":1911,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-1ee69c0d-12c0-41a6-823d-382e268af928
STEP: Creating a pod to test consume configMaps
Mar 27 21:39:24.902: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-81211c16-1925-42f5-8d22-214d65359b59" in namespace "projected-9029" to be "Succeeded or Failed"
Mar 27 21:39:24.931: INFO: Pod "pod-projected-configmaps-81211c16-1925-42f5-8d22-214d65359b59": Phase="Pending", Reason="", readiness=false. Elapsed: 28.427541ms
Mar 27 21:39:26.960: INFO: Pod "pod-projected-configmaps-81211c16-1925-42f5-8d22-214d65359b59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057739922s
STEP: Saw pod success
Mar 27 21:39:26.960: INFO: Pod "pod-projected-configmaps-81211c16-1925-42f5-8d22-214d65359b59" satisfied condition "Succeeded or Failed"
Mar 27 21:39:26.990: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-configmaps-81211c16-1925-42f5-8d22-214d65359b59 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 21:39:27.085: INFO: Waiting for pod pod-projected-configmaps-81211c16-1925-42f5-8d22-214d65359b59 to disappear
Mar 27 21:39:27.114: INFO: Pod pod-projected-configmaps-81211c16-1925-42f5-8d22-214d65359b59 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 27 21:39:27.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9029" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":121,"skipped":1914,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-8f5h
STEP: Creating a pod to test atomic-volume-subpath
Mar 27 21:39:27.437: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8f5h" in namespace "subpath-589" to be "Succeeded or Failed"
Mar 27 21:39:27.467: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Pending", Reason="", readiness=false. Elapsed: 30.604473ms
Mar 27 21:39:29.496: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Running", Reason="", readiness=true. Elapsed: 2.059259584s
Mar 27 21:39:31.525: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Running", Reason="", readiness=true. Elapsed: 4.088700103s
Mar 27 21:39:33.555: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Running", Reason="", readiness=true. Elapsed: 6.118306288s
Mar 27 21:39:35.585: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Running", Reason="", readiness=true. Elapsed: 8.148037208s
Mar 27 21:39:37.615: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Running", Reason="", readiness=true. Elapsed: 10.17870383s
Mar 27 21:39:39.645: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Running", Reason="", readiness=true. Elapsed: 12.208382087s
Mar 27 21:39:41.676: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Running", Reason="", readiness=true. Elapsed: 14.239344334s
Mar 27 21:39:43.706: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Running", Reason="", readiness=true. Elapsed: 16.269343693s
Mar 27 21:39:45.735: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Running", Reason="", readiness=true. Elapsed: 18.298829248s
Mar 27 21:39:47.766: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Running", Reason="", readiness=true. Elapsed: 20.328897246s
Mar 27 21:39:49.795: INFO: Pod "pod-subpath-test-configmap-8f5h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.358208654s
STEP: Saw pod success
Mar 27 21:39:49.795: INFO: Pod "pod-subpath-test-configmap-8f5h" satisfied condition "Succeeded or Failed"
Mar 27 21:39:49.824: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-subpath-test-configmap-8f5h container test-container-subpath-configmap-8f5h: <nil>
STEP: delete the pod
Mar 27 21:39:49.899: INFO: Waiting for pod pod-subpath-test-configmap-8f5h to disappear
Mar 27 21:39:49.928: INFO: Pod pod-subpath-test-configmap-8f5h no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8f5h
Mar 27 21:39:49.928: INFO: Deleting pod "pod-subpath-test-configmap-8f5h" in namespace "subpath-589"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 27 21:39:49.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-589" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":283,"completed":122,"skipped":1941,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Mar 27 21:39:52.947: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 27 21:39:52.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5397" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":283,"completed":123,"skipped":1943,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 27 21:40:10.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6269" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":283,"completed":124,"skipped":2020,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 28 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 27 21:40:12.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2855" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":283,"completed":125,"skipped":2024,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5054
STEP: Creating statefulset with conflicting port in namespace statefulset-5054
STEP: Waiting until pod test-pod will start running in namespace statefulset-5054
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5054
Mar 27 21:40:14.503: INFO: Observed stateful pod in namespace: statefulset-5054, name: ss-0, uid: 662ff3e5-8977-4057-9d33-aa0eafd211ed, status phase: Pending. Waiting for statefulset controller to delete.
Mar 27 21:40:16.547: INFO: Observed stateful pod in namespace: statefulset-5054, name: ss-0, uid: 662ff3e5-8977-4057-9d33-aa0eafd211ed, status phase: Failed. Waiting for statefulset controller to delete.
Mar 27 21:40:16.558: INFO: Observed stateful pod in namespace: statefulset-5054, name: ss-0, uid: 662ff3e5-8977-4057-9d33-aa0eafd211ed, status phase: Failed. Waiting for statefulset controller to delete.
Mar 27 21:40:16.569: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5054
STEP: Removing pod with conflicting port in namespace statefulset-5054
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5054 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:110
Mar 27 21:40:26.833: INFO: Deleting all statefulset in ns statefulset-5054
Mar 27 21:40:26.862: INFO: Scaling statefulset ss to 0
Mar 27 21:40:36.991: INFO: Waiting for statefulset status.replicas updated to 0
Mar 27 21:40:37.024: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 27 21:40:37.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5054" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":283,"completed":126,"skipped":2034,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 27 21:40:37.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7110" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":283,"completed":127,"skipped":2084,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Mar 27 21:41:08.024: INFO: Waiting for statefulset status.replicas updated to 0
Mar 27 21:41:08.052: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 27 21:41:08.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8146" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":283,"completed":128,"skipped":2089,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Mar 27 21:41:26.754: INFO: Restart count of pod container-probe-2805/liveness-6e589f8e-480a-49d1-a8b5-dc75eda047b8 is now 1 (16.265294318s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 27 21:41:26.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2805" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":129,"skipped":2093,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 27 21:41:27.015: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 21:41:27.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5906" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":283,"completed":130,"skipped":2101,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 27 21:41:39.964: INFO: File wheezy_udp@dns-test-service-3.dns-3321.svc.cluster.local from pod  dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 27 21:41:39.994: INFO: File jessie_udp@dns-test-service-3.dns-3321.svc.cluster.local from pod  dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 27 21:41:39.994: INFO: Lookups using dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 failed for: [wheezy_udp@dns-test-service-3.dns-3321.svc.cluster.local jessie_udp@dns-test-service-3.dns-3321.svc.cluster.local]

Mar 27 21:41:45.027: INFO: File wheezy_udp@dns-test-service-3.dns-3321.svc.cluster.local from pod  dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 27 21:41:45.058: INFO: File jessie_udp@dns-test-service-3.dns-3321.svc.cluster.local from pod  dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 27 21:41:45.058: INFO: Lookups using dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 failed for: [wheezy_udp@dns-test-service-3.dns-3321.svc.cluster.local jessie_udp@dns-test-service-3.dns-3321.svc.cluster.local]

Mar 27 21:41:50.025: INFO: File wheezy_udp@dns-test-service-3.dns-3321.svc.cluster.local from pod  dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 27 21:41:50.055: INFO: File jessie_udp@dns-test-service-3.dns-3321.svc.cluster.local from pod  dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 27 21:41:50.055: INFO: Lookups using dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 failed for: [wheezy_udp@dns-test-service-3.dns-3321.svc.cluster.local jessie_udp@dns-test-service-3.dns-3321.svc.cluster.local]

Mar 27 21:41:55.026: INFO: File wheezy_udp@dns-test-service-3.dns-3321.svc.cluster.local from pod  dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 27 21:41:55.057: INFO: File jessie_udp@dns-test-service-3.dns-3321.svc.cluster.local from pod  dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 27 21:41:55.057: INFO: Lookups using dns-3321/dns-test-55d47528-c694-4740-a03e-26c412bd9114 failed for: [wheezy_udp@dns-test-service-3.dns-3321.svc.cluster.local jessie_udp@dns-test-service-3.dns-3321.svc.cluster.local]

Mar 27 21:42:00.057: INFO: DNS probes using dns-test-55d47528-c694-4740-a03e-26c412bd9114 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3321.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3321.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 27 21:42:02.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3321" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":283,"completed":131,"skipped":2114,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 21:42:02.595: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 27 21:42:02.762: INFO: Waiting up to 5m0s for pod "pod-a87cc566-dd7e-461a-b22c-d3f523b4a1e9" in namespace "emptydir-3661" to be "Succeeded or Failed"
Mar 27 21:42:02.790: INFO: Pod "pod-a87cc566-dd7e-461a-b22c-d3f523b4a1e9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.502231ms
Mar 27 21:42:04.819: INFO: Pod "pod-a87cc566-dd7e-461a-b22c-d3f523b4a1e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057605943s
STEP: Saw pod success
Mar 27 21:42:04.819: INFO: Pod "pod-a87cc566-dd7e-461a-b22c-d3f523b4a1e9" satisfied condition "Succeeded or Failed"
Mar 27 21:42:04.848: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-a87cc566-dd7e-461a-b22c-d3f523b4a1e9 container test-container: <nil>
STEP: delete the pod
Mar 27 21:42:04.940: INFO: Waiting for pod pod-a87cc566-dd7e-461a-b22c-d3f523b4a1e9 to disappear
Mar 27 21:42:04.970: INFO: Pod pod-a87cc566-dd7e-461a-b22c-d3f523b4a1e9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 21:42:04.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3661" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":132,"skipped":2133,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 21 lines ...
Mar 27 21:42:17.546: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 27 21:42:17.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8659" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":283,"completed":133,"skipped":2136,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 22 lines ...
Mar 27 21:42:43.902: INFO: The status of Pod test-webserver-d72b9ffe-69a6-42b8-b63d-fd471458ff16 is Running (Ready = true)
Mar 27 21:42:43.931: INFO: Container started at 2020-03-27 21:42:18 +0000 UTC, pod became ready at 2020-03-27 21:42:42 +0000 UTC
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 27 21:42:43.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4163" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":283,"completed":134,"skipped":2147,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 8 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 27 21:42:46.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7734" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":283,"completed":135,"skipped":2160,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 27 21:42:48.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8243" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":136,"skipped":2161,"failed":0}

------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 21:42:49.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6691" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":283,"completed":137,"skipped":2161,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 27 21:43:05.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3154" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":283,"completed":138,"skipped":2186,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-c9558149-6b37-4d13-9001-22763ac4739a
STEP: Creating a pod to test consume secrets
Mar 27 21:43:05.998: INFO: Waiting up to 5m0s for pod "pod-secrets-54cfc838-d444-4e7a-81e7-4cb954d37716" in namespace "secrets-9312" to be "Succeeded or Failed"
Mar 27 21:43:06.029: INFO: Pod "pod-secrets-54cfc838-d444-4e7a-81e7-4cb954d37716": Phase="Pending", Reason="", readiness=false. Elapsed: 30.275895ms
Mar 27 21:43:08.058: INFO: Pod "pod-secrets-54cfc838-d444-4e7a-81e7-4cb954d37716": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059548412s
STEP: Saw pod success
Mar 27 21:43:08.058: INFO: Pod "pod-secrets-54cfc838-d444-4e7a-81e7-4cb954d37716" satisfied condition "Succeeded or Failed"
Mar 27 21:43:08.087: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-secrets-54cfc838-d444-4e7a-81e7-4cb954d37716 container secret-volume-test: <nil>
STEP: delete the pod
Mar 27 21:43:08.165: INFO: Waiting for pod pod-secrets-54cfc838-d444-4e7a-81e7-4cb954d37716 to disappear
Mar 27 21:43:08.196: INFO: Pod pod-secrets-54cfc838-d444-4e7a-81e7-4cb954d37716 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 27 21:43:08.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9312" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":139,"skipped":2195,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 21:43:25.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6394" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":283,"completed":140,"skipped":2273,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-9b648ff9-45d9-4882-8909-94a27c21d311
STEP: Creating a pod to test consume configMaps
Mar 27 21:43:26.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f5bd780-fdfa-4119-9a51-bcd817ca6029" in namespace "configmap-1421" to be "Succeeded or Failed"
Mar 27 21:43:26.136: INFO: Pod "pod-configmaps-6f5bd780-fdfa-4119-9a51-bcd817ca6029": Phase="Pending", Reason="", readiness=false. Elapsed: 31.990812ms
Mar 27 21:43:28.166: INFO: Pod "pod-configmaps-6f5bd780-fdfa-4119-9a51-bcd817ca6029": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061993209s
STEP: Saw pod success
Mar 27 21:43:28.166: INFO: Pod "pod-configmaps-6f5bd780-fdfa-4119-9a51-bcd817ca6029" satisfied condition "Succeeded or Failed"
Mar 27 21:43:28.196: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-configmaps-6f5bd780-fdfa-4119-9a51-bcd817ca6029 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 21:43:28.272: INFO: Waiting for pod pod-configmaps-6f5bd780-fdfa-4119-9a51-bcd817ca6029 to disappear
Mar 27 21:43:28.302: INFO: Pod pod-configmaps-6f5bd780-fdfa-4119-9a51-bcd817ca6029 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 21:43:28.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1421" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":283,"completed":141,"skipped":2281,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 27 21:43:30.675: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 27 21:43:30.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-358" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":142,"skipped":2299,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 27 21:43:30.848: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Mar 27 21:43:30.981: INFO: PodSpec: initContainers in spec.initContainers
Mar 27 21:44:14.084: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9a959aa2-9d88-49c4-bcf2-d5e806c7dc7c", GenerateName:"", Namespace:"init-container-4858", SelfLink:"/api/v1/namespaces/init-container-4858/pods/pod-init-9a959aa2-9d88-49c4-bcf2-d5e806c7dc7c", UID:"51ed32ad-31ac-4863-82e5-2a5a080cc172", ResourceVersion:"14668", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720942210, loc:(*time.Location)(0x7b56f40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"981890892"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"192.168.63.92/32"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6qthf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005ae0180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6qthf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6qthf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6qthf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0036367c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0009342a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003636840)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003636860)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003636868), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00363686c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720942211, loc:(*time.Location)(0x7b56f40)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720942211, loc:(*time.Location)(0x7b56f40)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720942211, loc:(*time.Location)(0x7b56f40)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720942211, loc:(*time.Location)(0x7b56f40)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.150.0.4", PodIP:"192.168.63.92", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.63.92"}}, StartTime:(*v1.Time)(0xc002dcaa60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000934380)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009343f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d63a525d1cf37907142b4c6ef3a07f4a52c3e7b1572c80c39a176035bcf4a821", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002dcaaa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002dcaa80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00363693f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 27 21:44:14.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4858" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":283,"completed":143,"skipped":2308,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 21:44:14.174: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 27 21:44:14.330: INFO: Waiting up to 5m0s for pod "pod-3dbfe28d-0c08-49e3-9293-441ae21bb98e" in namespace "emptydir-5003" to be "Succeeded or Failed"
Mar 27 21:44:14.362: INFO: Pod "pod-3dbfe28d-0c08-49e3-9293-441ae21bb98e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.493152ms
Mar 27 21:44:16.392: INFO: Pod "pod-3dbfe28d-0c08-49e3-9293-441ae21bb98e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061330027s
STEP: Saw pod success
Mar 27 21:44:16.392: INFO: Pod "pod-3dbfe28d-0c08-49e3-9293-441ae21bb98e" satisfied condition "Succeeded or Failed"
Mar 27 21:44:16.421: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-3dbfe28d-0c08-49e3-9293-441ae21bb98e container test-container: <nil>
STEP: delete the pod
Mar 27 21:44:16.496: INFO: Waiting for pod pod-3dbfe28d-0c08-49e3-9293-441ae21bb98e to disappear
Mar 27 21:44:16.527: INFO: Pod pod-3dbfe28d-0c08-49e3-9293-441ae21bb98e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 21:44:16.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5003" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":144,"skipped":2314,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 25 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:175
Mar 27 21:44:26.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9476" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":283,"completed":145,"skipped":2338,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 21:44:26.141: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 27 21:44:26.299: INFO: Waiting up to 5m0s for pod "pod-a78204e4-7d5e-411e-a1c1-a3714cc5cbe4" in namespace "emptydir-8453" to be "Succeeded or Failed"
Mar 27 21:44:26.330: INFO: Pod "pod-a78204e4-7d5e-411e-a1c1-a3714cc5cbe4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.316594ms
Mar 27 21:44:28.360: INFO: Pod "pod-a78204e4-7d5e-411e-a1c1-a3714cc5cbe4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060883058s
STEP: Saw pod success
Mar 27 21:44:28.360: INFO: Pod "pod-a78204e4-7d5e-411e-a1c1-a3714cc5cbe4" satisfied condition "Succeeded or Failed"
Mar 27 21:44:28.388: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-a78204e4-7d5e-411e-a1c1-a3714cc5cbe4 container test-container: <nil>
STEP: delete the pod
Mar 27 21:44:28.498: INFO: Waiting for pod pod-a78204e4-7d5e-411e-a1c1-a3714cc5cbe4 to disappear
Mar 27 21:44:28.530: INFO: Pod pod-a78204e4-7d5e-411e-a1c1-a3714cc5cbe4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 21:44:28.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8453" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":146,"skipped":2339,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 27 21:44:29.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0327 21:44:29.592724   24935 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-9419" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":283,"completed":147,"skipped":2340,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
Mar 27 21:44:37.272: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:175
Mar 27 21:44:37.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-5004" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":283,"completed":148,"skipped":2358,"failed":0}
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 74 lines ...
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-q7vq9 webserver-deployment-595b5b9587- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-595b5b9587-q7vq9 314101e7-1665-4ca5-85c7-0304fef7753f 15383 0 2020-03-27 21:44:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:192.168.63.107/32] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 21a3ab2e-66aa-4ef1-b9ac-2437ec630b1e 0xc003558d90 0xc003558d91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPo
licy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:,StartTime:2020-03-27 21:44:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.678: INFO: Pod "webserver-deployment-595b5b9587-thjv7" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-thjv7 webserver-deployment-595b5b9587- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-595b5b9587-thjv7 4a30dfc5-c9cc-46db-9969-b1f3e7307e44 15094 0 2020-03-27 21:44:37 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:192.168.63.97/32] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 21a3ab2e-66aa-4ef1-b9ac-2437ec630b1e 0xc003558ef0 0xc003558ef1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPol
icy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.63.97,StartTime:2020-03-27 21:44:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 21:44:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ce2819e1abe2db8310bda3b5a6ad6078cbb4d95ec4934c7bcf575123fe2f3804,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.63.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.679: INFO: Pod "webserver-deployment-595b5b9587-xcvs5" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xcvs5 webserver-deployment-595b5b9587- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-595b5b9587-xcvs5 2d9debd1-de74-49d0-b68e-ee45c65c1317 15085 0 2020-03-27 21:44:37 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:192.168.0.37/32] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 21a3ab2e-66aa-4ef1-b9ac-2437ec630b1e 0xc003559070 0xc003559071}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPoli
cy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:192.168.0.37,StartTime:2020-03-27 21:44:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 21:44:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://206f3775a7a00648c0ac99043546718e03b5b3107a82e660c199f703c8c4b350,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.679: INFO: Pod "webserver-deployment-c7997dcc8-4t24h" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4t24h webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-4t24h bd479da0-2766-4338-a141-a7285a4c5c8f 15246 0 2020-03-27 21:44:42 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.63.101/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc0035591d0 0xc0035591d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.63.101,StartTime:2020-03-27 21:44:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.63.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.680: INFO: Pod "webserver-deployment-c7997dcc8-5r4vs" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5r4vs webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-5r4vs 2e3c7d78-d9e6-4961-87de-1281deaefc3b 15401 0 2020-03-27 21:44:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.0.49/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc003559370 0xc003559371}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePer
iodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:,StartTime:2020-03-27 21:44:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.680: INFO: Pod "webserver-deployment-c7997dcc8-67kld" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-67kld webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-67kld 7efec7a5-b407-4698-8afd-e24c7efd0b33 15357 0 2020-03-27 21:44:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.0.44/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc0035594e0 0xc0035594e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePer
iodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:,StartTime:2020-03-27 21:44:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.680: INFO: Pod "webserver-deployment-c7997dcc8-88wq4" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-88wq4 webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-88wq4 7d5ae873-648b-4eb6-a60f-67d39d824ad1 15377 0 2020-03-27 21:44:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.63.105/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc003559640 0xc003559641}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:,StartTime:2020-03-27 21:44:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.680: INFO: Pod "webserver-deployment-c7997dcc8-b7wjl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b7wjl webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-b7wjl c7491c4a-615e-46e2-ae22-1557a7f464d6 15243 0 2020-03-27 21:44:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.63.100/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc0035597a0 0xc0035597a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.63.100,StartTime:2020-03-27 21:44:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.63.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.680: INFO: Pod "webserver-deployment-c7997dcc8-bnm8r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bnm8r webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-bnm8r e0c04ea1-de1d-4186-8772-01083d4b6d29 15313 0 2020-03-27 21:44:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc003559930 0xc003559931}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNS
Policy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.680: INFO: Pod "webserver-deployment-c7997dcc8-hvtvb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hvtvb webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-hvtvb 2df2ef8e-d0b9-495f-9477-3fd7591545e5 15420 0 2020-03-27 21:44:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc003559a40 0xc003559a41}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNS
Policy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:,StartTime:2020-03-27 21:44:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.681: INFO: Pod "webserver-deployment-c7997dcc8-jsx42" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jsx42 webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-jsx42 04f9932e-5b4c-4b76-b66b-a4461b13f425 15388 0 2020-03-27 21:44:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.0.46/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc003559bb0 0xc003559bb1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePer
iodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:,StartTime:2020-03-27 21:44:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.681: INFO: Pod "webserver-deployment-c7997dcc8-nkcws" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nkcws webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-nkcws c7ffc813-613f-4f7f-ba48-19eb86ea7ea5 15434 0 2020-03-27 21:44:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.63.111/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc003559d10 0xc003559d11}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:,StartTime:2020-03-27 21:44:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.681: INFO: Pod "webserver-deployment-c7997dcc8-p4w42" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p4w42 webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-p4w42 eeb07003-a082-4d50-a2e1-c2758ea592a6 15233 0 2020-03-27 21:44:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.0.43/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc003559e80 0xc003559e81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePer
iodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:192.168.0.43,StartTime:2020-03-27 21:44:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.681: INFO: Pod "webserver-deployment-c7997dcc8-ph9rl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ph9rl webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-ph9rl 3c24cc09-f490-4c29-acf7-c37454d58ca1 15236 0 2020-03-27 21:44:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.0.42/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc0007d6040 0xc0007d6041}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePer
iodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:192.168.0.42,StartTime:2020-03-27 21:44:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.681: INFO: Pod "webserver-deployment-c7997dcc8-sv76z" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sv76z webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-sv76z effc561f-d126-4b45-a055-6d540d0eee59 15435 0 2020-03-27 21:44:44 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.0.50/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc0007d6b30 0xc0007d6b31}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePer
iodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:,StartTime:2020-03-27 21:44:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 27 21:44:46.682: INFO: Pod "webserver-deployment-c7997dcc8-svhnb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-svhnb webserver-deployment-c7997dcc8- deployment-2809 /api/v1/namespaces/deployment-2809/pods/webserver-deployment-c7997dcc8-svhnb 82672877-4ffb-4cb6-a74f-b8a2b5d291c8 15240 0 2020-03-27 21:44:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.63.103/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e46223c7-37f6-4590-ab56-d3c6da01bba5 0xc0007d6ca0 0xc0007d6ca1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-shz8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-shz8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-shz8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 21:44:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.63.103,StartTime:2020-03-27 21:44:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.63.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
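A note on the dumps above: "is not available" roughly means the pod's Ready condition is not True (the real controller also accounts for minReadySeconds). A toy check over a status shaped like the dumps, as a sketch only:

```python
def pod_is_available(pod_status):
    """Toy availability check: available iff the Ready condition is True.
    Simplified; the real Deployment controller also honors minReadySeconds."""
    for cond in pod_status.get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False

# shaped like the Pending pods dumped above (Ready=False, ContainersNotReady)
status = {"phase": "Pending",
          "conditions": [{"type": "Ready", "status": "False",
                          "reason": "ContainersNotReady"}]}
pod_is_available(status)  # False for these pods
```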
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 27 21:44:46.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2809" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":283,"completed":149,"skipped":2364,"failed":0}
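For context on what "proportional scaling" means here: when a Deployment is resized mid-rollout, replicas are split across the old and new ReplicaSets in proportion to their current sizes. A simplified sketch (illustrative numbers; the real controller also distributes max-surge headroom, which this omits):

```python
def proportional_scale(replicas, new_total):
    """Split new_total across ReplicaSets proportionally to current sizes,
    handing rounding leftovers to the largest sets first.
    Simplified sketch; assumes at least one existing replica."""
    old_total = sum(replicas)
    scaled = [r * new_total // old_total for r in replicas]
    leftover = new_total - sum(scaled)
    # leftover is always < len(replicas); give it out largest-first
    for i in sorted(range(len(replicas)), key=lambda i: -replicas[i])[:leftover]:
        scaled[i] += 1
    return scaled

# e.g. old ReplicaSet at 8, new at 5, Deployment scaled to 30
proportional_scale([8, 5], 30)
```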

------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-1014/secret-test-647f31de-b66f-40d3-b437-18e3dff832ca
STEP: Creating a pod to test consume secrets
Mar 27 21:44:46.940: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d7474ef-8d22-4031-a10d-a2e4d79dd20d" in namespace "secrets-1014" to be "Succeeded or Failed"
Mar 27 21:44:46.969: INFO: Pod "pod-configmaps-6d7474ef-8d22-4031-a10d-a2e4d79dd20d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.720434ms
Mar 27 21:44:49.001: INFO: Pod "pod-configmaps-6d7474ef-8d22-4031-a10d-a2e4d79dd20d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060797782s
Mar 27 21:44:51.030: INFO: Pod "pod-configmaps-6d7474ef-8d22-4031-a10d-a2e4d79dd20d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090317217s
STEP: Saw pod success
Mar 27 21:44:51.030: INFO: Pod "pod-configmaps-6d7474ef-8d22-4031-a10d-a2e4d79dd20d" satisfied condition "Succeeded or Failed"
Mar 27 21:44:51.059: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-configmaps-6d7474ef-8d22-4031-a10d-a2e4d79dd20d container env-test: <nil>
STEP: delete the pod
Mar 27 21:44:51.135: INFO: Waiting for pod pod-configmaps-6d7474ef-8d22-4031-a10d-a2e4d79dd20d to disappear
Mar 27 21:44:51.165: INFO: Pod pod-configmaps-6d7474ef-8d22-4031-a10d-a2e4d79dd20d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 27 21:44:51.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1014" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":150,"skipped":2364,"failed":0}
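Background for this test: Secret `data` values are stored base64-encoded in the API, and a container consuming a key through an env var sees the decoded bytes. A minimal sketch (key and value names are hypothetical, not from this log):

```python
import base64

# a Secret as stored in the API: data values are base64-encoded strings
secret = {"data": {"data-1": base64.b64encode(b"value-1").decode()}}

# what a container consuming the key via env/valueFrom would observe
env_value = base64.b64decode(secret["data"]["data-1"]).decode()
```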
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 27 21:44:51.254: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 27 21:44:51.417: INFO: Waiting up to 5m0s for pod "downward-api-6c8d36a6-efa0-4841-8836-f0a9e88acff2" in namespace "downward-api-4171" to be "Succeeded or Failed"
Mar 27 21:44:51.447: INFO: Pod "downward-api-6c8d36a6-efa0-4841-8836-f0a9e88acff2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.963083ms
Mar 27 21:44:53.483: INFO: Pod "downward-api-6c8d36a6-efa0-4841-8836-f0a9e88acff2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065240597s
Mar 27 21:44:55.512: INFO: Pod "downward-api-6c8d36a6-efa0-4841-8836-f0a9e88acff2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094590827s
Mar 27 21:44:57.541: INFO: Pod "downward-api-6c8d36a6-efa0-4841-8836-f0a9e88acff2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124048272s
STEP: Saw pod success
Mar 27 21:44:57.541: INFO: Pod "downward-api-6c8d36a6-efa0-4841-8836-f0a9e88acff2" satisfied condition "Succeeded or Failed"
Mar 27 21:44:57.572: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downward-api-6c8d36a6-efa0-4841-8836-f0a9e88acff2 container dapi-container: <nil>
STEP: delete the pod
Mar 27 21:44:58.046: INFO: Waiting for pod downward-api-6c8d36a6-efa0-4841-8836-f0a9e88acff2 to disappear
Mar 27 21:44:58.077: INFO: Pod downward-api-6c8d36a6-efa0-4841-8836-f0a9e88acff2 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 27 21:44:58.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4171" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":283,"completed":151,"skipped":2390,"failed":0}
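Background for this test: the Downward API injects `status.hostIP` into the container through an env var's `valueFrom.fieldRef`. A toy resolver over a pod-shaped dict, as a sketch only (the real kubelet validates fieldPaths against an allow-list and supports map keys like `metadata.labels['k']`):

```python
def resolve_field_ref(pod, field_path):
    """Walk a dotted fieldPath (e.g. 'status.hostIP') through a pod object.
    Toy version of the Downward API lookup; no validation is performed."""
    value = pod
    for part in field_path.split("."):
        value = value[part]
    return value

# host/pod IPs as reported elsewhere in this log
pod = {"status": {"hostIP": "10.150.0.4", "podIP": "192.168.63.103"}}
host_ip = resolve_field_ref(pod, "status.hostIP")
```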
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-ff7b2085-7429-46ec-b1c3-3babdd2daf43
STEP: Creating a pod to test consume configMaps
Mar 27 21:44:58.355: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f780e0e-ce62-44f0-b106-9b3eb056a5e1" in namespace "projected-8629" to be "Succeeded or Failed"
Mar 27 21:44:58.385: INFO: Pod "pod-projected-configmaps-0f780e0e-ce62-44f0-b106-9b3eb056a5e1": Phase="Pending", Reason="", readiness=false. Elapsed: 29.3943ms
Mar 27 21:45:00.415: INFO: Pod "pod-projected-configmaps-0f780e0e-ce62-44f0-b106-9b3eb056a5e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059626197s
STEP: Saw pod success
Mar 27 21:45:00.415: INFO: Pod "pod-projected-configmaps-0f780e0e-ce62-44f0-b106-9b3eb056a5e1" satisfied condition "Succeeded or Failed"
Mar 27 21:45:00.443: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-configmaps-0f780e0e-ce62-44f0-b106-9b3eb056a5e1 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 21:45:00.517: INFO: Waiting for pod pod-projected-configmaps-0f780e0e-ce62-44f0-b106-9b3eb056a5e1 to disappear
Mar 27 21:45:00.546: INFO: Pod pod-projected-configmaps-0f780e0e-ce62-44f0-b106-9b3eb056a5e1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 27 21:45:00.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8629" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":152,"skipped":2406,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-f5zj
STEP: Creating a pod to test atomic-volume-subpath
Mar 27 21:45:00.862: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-f5zj" in namespace "subpath-6487" to be "Succeeded or Failed"
Mar 27 21:45:00.890: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Pending", Reason="", readiness=false. Elapsed: 28.529019ms
Mar 27 21:45:02.920: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Running", Reason="", readiness=true. Elapsed: 2.058340089s
Mar 27 21:45:04.950: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Running", Reason="", readiness=true. Elapsed: 4.08780372s
Mar 27 21:45:06.979: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Running", Reason="", readiness=true. Elapsed: 6.11699133s
Mar 27 21:45:09.009: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Running", Reason="", readiness=true. Elapsed: 8.147149555s
Mar 27 21:45:11.039: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Running", Reason="", readiness=true. Elapsed: 10.176736855s
Mar 27 21:45:13.068: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Running", Reason="", readiness=true. Elapsed: 12.206026092s
Mar 27 21:45:15.098: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Running", Reason="", readiness=true. Elapsed: 14.235856248s
Mar 27 21:45:17.128: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Running", Reason="", readiness=true. Elapsed: 16.265830692s
Mar 27 21:45:19.157: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Running", Reason="", readiness=true. Elapsed: 18.295462099s
Mar 27 21:45:21.186: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Running", Reason="", readiness=true. Elapsed: 20.324536916s
Mar 27 21:45:23.216: INFO: Pod "pod-subpath-test-secret-f5zj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.35400255s
STEP: Saw pod success
Mar 27 21:45:23.216: INFO: Pod "pod-subpath-test-secret-f5zj" satisfied condition "Succeeded or Failed"
Mar 27 21:45:23.245: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-subpath-test-secret-f5zj container test-container-subpath-secret-f5zj: <nil>
STEP: delete the pod
Mar 27 21:45:23.321: INFO: Waiting for pod pod-subpath-test-secret-f5zj to disappear
Mar 27 21:45:23.351: INFO: Pod pod-subpath-test-secret-f5zj no longer exists
STEP: Deleting pod pod-subpath-test-secret-f5zj
Mar 27 21:45:23.351: INFO: Deleting pod "pod-subpath-test-secret-f5zj" in namespace "subpath-6487"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 27 21:45:23.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6487" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":283,"completed":153,"skipped":2408,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 58 lines ...
Mar 27 21:47:27.089: INFO: Waiting for statefulset status.replicas updated to 0
Mar 27 21:47:27.119: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 27 21:47:27.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3094" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":283,"completed":154,"skipped":2425,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
Mar 27 21:48:44.378: INFO: Terminating ReplicationController wrapped-volume-race-69502f6c-6099-42ec-b1de-a81338255e3e pods took: 2.100239145s
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 27 21:48:59.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9781" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":283,"completed":155,"skipped":2436,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 21:48:59.375: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 27 21:48:59.541: INFO: Waiting up to 5m0s for pod "pod-786d7a6d-48a5-48ac-b5a9-a86514c46736" in namespace "emptydir-2673" to be "Succeeded or Failed"
Mar 27 21:48:59.575: INFO: Pod "pod-786d7a6d-48a5-48ac-b5a9-a86514c46736": Phase="Pending", Reason="", readiness=false. Elapsed: 34.62154ms
Mar 27 21:49:01.607: INFO: Pod "pod-786d7a6d-48a5-48ac-b5a9-a86514c46736": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065719729s
STEP: Saw pod success
Mar 27 21:49:01.607: INFO: Pod "pod-786d7a6d-48a5-48ac-b5a9-a86514c46736" satisfied condition "Succeeded or Failed"
Mar 27 21:49:01.636: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-786d7a6d-48a5-48ac-b5a9-a86514c46736 container test-container: <nil>
STEP: delete the pod
Mar 27 21:49:01.723: INFO: Waiting for pod pod-786d7a6d-48a5-48ac-b5a9-a86514c46736 to disappear
Mar 27 21:49:01.754: INFO: Pod pod-786d7a6d-48a5-48ac-b5a9-a86514c46736 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 21:49:01.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2673" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":156,"skipped":2492,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
Mar 27 21:51:17.539: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:175
Mar 27 21:51:17.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-8936" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":283,"completed":157,"skipped":2514,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 27 21:51:17.634: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Mar 27 21:51:17.805: INFO: Waiting up to 5m0s for pod "client-containers-4a2dfc70-6e08-43ee-9b01-d5620929ef2d" in namespace "containers-673" to be "Succeeded or Failed"
Mar 27 21:51:17.834: INFO: Pod "client-containers-4a2dfc70-6e08-43ee-9b01-d5620929ef2d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.6492ms
Mar 27 21:51:19.863: INFO: Pod "client-containers-4a2dfc70-6e08-43ee-9b01-d5620929ef2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.05825102s
STEP: Saw pod success
Mar 27 21:51:19.863: INFO: Pod "client-containers-4a2dfc70-6e08-43ee-9b01-d5620929ef2d" satisfied condition "Succeeded or Failed"
Mar 27 21:51:19.892: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod client-containers-4a2dfc70-6e08-43ee-9b01-d5620929ef2d container test-container: <nil>
STEP: delete the pod
Mar 27 21:51:19.990: INFO: Waiting for pod client-containers-4a2dfc70-6e08-43ee-9b01-d5620929ef2d to disappear
Mar 27 21:51:20.020: INFO: Pod client-containers-4a2dfc70-6e08-43ee-9b01-d5620929ef2d no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 27 21:51:20.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-673" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":283,"completed":158,"skipped":2523,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-fg49
STEP: Creating a pod to test atomic-volume-subpath
Mar 27 21:51:20.331: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fg49" in namespace "subpath-4439" to be "Succeeded or Failed"
Mar 27 21:51:20.361: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Pending", Reason="", readiness=false. Elapsed: 29.872187ms
Mar 27 21:51:22.390: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Running", Reason="", readiness=true. Elapsed: 2.059320048s
Mar 27 21:51:24.420: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Running", Reason="", readiness=true. Elapsed: 4.088786494s
Mar 27 21:51:26.449: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Running", Reason="", readiness=true. Elapsed: 6.118227394s
Mar 27 21:51:28.479: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Running", Reason="", readiness=true. Elapsed: 8.147796492s
Mar 27 21:51:30.508: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Running", Reason="", readiness=true. Elapsed: 10.177011567s
Mar 27 21:51:32.537: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Running", Reason="", readiness=true. Elapsed: 12.206283861s
Mar 27 21:51:34.566: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Running", Reason="", readiness=true. Elapsed: 14.235533933s
Mar 27 21:51:36.596: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Running", Reason="", readiness=true. Elapsed: 16.264751545s
Mar 27 21:51:38.625: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Running", Reason="", readiness=true. Elapsed: 18.294042112s
Mar 27 21:51:40.654: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Running", Reason="", readiness=true. Elapsed: 20.323631826s
Mar 27 21:51:42.689: INFO: Pod "pod-subpath-test-downwardapi-fg49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.357825932s
STEP: Saw pod success
Mar 27 21:51:42.689: INFO: Pod "pod-subpath-test-downwardapi-fg49" satisfied condition "Succeeded or Failed"
Mar 27 21:51:42.718: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-subpath-test-downwardapi-fg49 container test-container-subpath-downwardapi-fg49: <nil>
STEP: delete the pod
Mar 27 21:51:42.800: INFO: Waiting for pod pod-subpath-test-downwardapi-fg49 to disappear
Mar 27 21:51:42.828: INFO: Pod pod-subpath-test-downwardapi-fg49 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-fg49
Mar 27 21:51:42.828: INFO: Deleting pod "pod-subpath-test-downwardapi-fg49" in namespace "subpath-4439"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 27 21:51:42.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4439" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":283,"completed":159,"skipped":2529,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 27 21:51:48.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1162" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":283,"completed":160,"skipped":2535,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Mar 27 21:51:51.329: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:51.361: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:51.593: INFO: Unable to read jessie_udp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:51.633: INFO: Unable to read jessie_tcp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:51.664: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:51.695: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:51.882: INFO: Lookups using dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84 failed for: [wheezy_udp@dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@dns-test-service.dns-3151.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_udp@dns-test-service.dns-3151.svc.cluster.local jessie_tcp@dns-test-service.dns-3151.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local]

Mar 27 21:51:56.913: INFO: Unable to read wheezy_udp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:56.944: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:56.975: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:57.007: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:57.225: INFO: Unable to read jessie_udp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:57.255: INFO: Unable to read jessie_tcp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:57.286: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:57.316: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:51:57.500: INFO: Lookups using dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84 failed for: [wheezy_udp@dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@dns-test-service.dns-3151.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_udp@dns-test-service.dns-3151.svc.cluster.local jessie_tcp@dns-test-service.dns-3151.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local]

Mar 27 21:52:01.914: INFO: Unable to read wheezy_udp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:01.945: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:01.976: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:02.007: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:02.231: INFO: Unable to read jessie_udp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:02.262: INFO: Unable to read jessie_tcp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:02.293: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:02.324: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:02.511: INFO: Lookups using dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84 failed for: [wheezy_udp@dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@dns-test-service.dns-3151.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_udp@dns-test-service.dns-3151.svc.cluster.local jessie_tcp@dns-test-service.dns-3151.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local]

Mar 27 21:52:06.913: INFO: Unable to read wheezy_udp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:06.943: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:06.973: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:07.004: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:07.222: INFO: Unable to read jessie_udp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:07.253: INFO: Unable to read jessie_tcp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:07.283: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:07.313: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:07.502: INFO: Lookups using dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84 failed for: [wheezy_udp@dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@dns-test-service.dns-3151.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_udp@dns-test-service.dns-3151.svc.cluster.local jessie_tcp@dns-test-service.dns-3151.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local]

Mar 27 21:52:11.912: INFO: Unable to read wheezy_udp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:11.944: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:11.974: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:12.006: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:12.228: INFO: Unable to read jessie_udp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:12.259: INFO: Unable to read jessie_tcp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:12.290: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:12.321: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:12.508: INFO: Lookups using dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84 failed for: [wheezy_udp@dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@dns-test-service.dns-3151.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_udp@dns-test-service.dns-3151.svc.cluster.local jessie_tcp@dns-test-service.dns-3151.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local]

Mar 27 21:52:16.913: INFO: Unable to read wheezy_udp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:16.943: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:16.975: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:17.006: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:17.226: INFO: Unable to read jessie_udp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:17.256: INFO: Unable to read jessie_tcp@dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:17.286: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:17.316: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local from pod dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84: the server could not find the requested resource (get pods dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84)
Mar 27 21:52:17.504: INFO: Lookups using dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84 failed for: [wheezy_udp@dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@dns-test-service.dns-3151.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_udp@dns-test-service.dns-3151.svc.cluster.local jessie_tcp@dns-test-service.dns-3151.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3151.svc.cluster.local]

Mar 27 21:52:22.514: INFO: DNS probes using dns-3151/dns-test-29fcf3a9-745d-4194-bf9d-b4f41433bb84 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 27 21:52:22.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3151" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":283,"completed":161,"skipped":2563,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 27 21:52:22.785: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Mar 27 21:54:23.602: INFO: Successfully updated pod "var-expansion-ef0da330-2b21-462a-81ab-781897106add"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Mar 27 21:54:25.661: INFO: Deleting pod "var-expansion-ef0da330-2b21-462a-81ab-781897106add" in namespace "var-expansion-603"
Mar 27 21:54:25.697: INFO: Wait up to 5m0s for pod "var-expansion-ef0da330-2b21-462a-81ab-781897106add" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 27 21:54:57.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-603" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":283,"completed":162,"skipped":2574,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Mar 27 21:54:59.192: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 27 21:54:59.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6241" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":283,"completed":163,"skipped":2580,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 129 lines ...
Mar 27 21:55:26.728: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1026/pods","resourceVersion":"19077"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 27 21:55:26.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1026" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":283,"completed":164,"skipped":2582,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 19 lines ...
Mar 27 21:55:37.407: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 27 21:55:37.437: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 27 21:55:37.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6118" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":283,"completed":165,"skipped":2587,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Mar 27 21:55:43.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9385" for this suite.
STEP: Destroying namespace "webhook-9385-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":283,"completed":166,"skipped":2595,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-d068ccae-bf87-4610-96bf-735b091aafc8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 21:57:18.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9492" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":167,"skipped":2610,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 417 lines ...
Mar 27 21:57:29.283: INFO: 99 %ile: 945.968877ms
Mar 27 21:57:29.283: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
Mar 27 21:57:29.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4955" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":283,"completed":168,"skipped":2620,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 27 21:57:29.376: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Mar 27 21:57:29.549: INFO: Waiting up to 5m0s for pod "var-expansion-74eed2a7-496e-4cec-8eb2-e10836853fca" in namespace "var-expansion-3440" to be "Succeeded or Failed"
Mar 27 21:57:29.580: INFO: Pod "var-expansion-74eed2a7-496e-4cec-8eb2-e10836853fca": Phase="Pending", Reason="", readiness=false. Elapsed: 30.726845ms
Mar 27 21:57:31.610: INFO: Pod "var-expansion-74eed2a7-496e-4cec-8eb2-e10836853fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060910594s
STEP: Saw pod success
Mar 27 21:57:31.610: INFO: Pod "var-expansion-74eed2a7-496e-4cec-8eb2-e10836853fca" satisfied condition "Succeeded or Failed"
Mar 27 21:57:31.639: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod var-expansion-74eed2a7-496e-4cec-8eb2-e10836853fca container dapi-container: <nil>
STEP: delete the pod
Mar 27 21:57:31.731: INFO: Waiting for pod var-expansion-74eed2a7-496e-4cec-8eb2-e10836853fca to disappear
Mar 27 21:57:31.762: INFO: Pod var-expansion-74eed2a7-496e-4cec-8eb2-e10836853fca no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 27 21:57:31.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3440" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":283,"completed":169,"skipped":2656,"failed":0}

------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-4f32c475-360e-4b4f-9a96-90d5e3cc1d9a
STEP: Creating a pod to test consume secrets
Mar 27 21:57:32.041: INFO: Waiting up to 5m0s for pod "pod-secrets-fdaa92de-4b97-4556-a6b8-eeb8dee6ccb4" in namespace "secrets-9087" to be "Succeeded or Failed"
Mar 27 21:57:32.075: INFO: Pod "pod-secrets-fdaa92de-4b97-4556-a6b8-eeb8dee6ccb4": Phase="Pending", Reason="", readiness=false. Elapsed: 33.676709ms
Mar 27 21:57:34.104: INFO: Pod "pod-secrets-fdaa92de-4b97-4556-a6b8-eeb8dee6ccb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063047942s
STEP: Saw pod success
Mar 27 21:57:34.104: INFO: Pod "pod-secrets-fdaa92de-4b97-4556-a6b8-eeb8dee6ccb4" satisfied condition "Succeeded or Failed"
Mar 27 21:57:34.133: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-secrets-fdaa92de-4b97-4556-a6b8-eeb8dee6ccb4 container secret-volume-test: <nil>
STEP: delete the pod
Mar 27 21:57:34.211: INFO: Waiting for pod pod-secrets-fdaa92de-4b97-4556-a6b8-eeb8dee6ccb4 to disappear
Mar 27 21:57:34.241: INFO: Pod pod-secrets-fdaa92de-4b97-4556-a6b8-eeb8dee6ccb4 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 27 21:57:34.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9087" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":170,"skipped":2656,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 9 lines ...
Mar 27 21:57:34.560: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 27 21:57:34.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5623" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":283,"completed":171,"skipped":2679,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0327 21:57:45.150663   24935 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 27 21:57:45.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8219" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":283,"completed":172,"skipped":2692,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
  test/e2e/framework/framework.go:175
Mar 27 21:57:54.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2745" for this suite.
STEP: Destroying namespace "webhook-2745-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":283,"completed":173,"skipped":2692,"failed":0}
SSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 15 lines ...
Mar 27 21:58:04.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 27 21:58:06.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 27 21:58:08.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720943075, loc:(*time.Location)(0x7b56f40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 27 21:59:10.424: INFO: Waited 1m0.250961417s for the sample-apiserver to be ready to handle requests.
Mar 27 21:59:10.424: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","selfLink":"/apis/apiregistration.k8s.io/v1/apiservices/v1alpha1.wardle.example.com","uid":"e24e45cb-a256-47a2-8539-61a2019fbdfa","resourceVersion":"21256","creationTimestamp":"2020-03-27T21:58:10Z"},"spec":{"service":{"namespace":"aggregator-8212","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyRENDQWNDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpBd016STNNakUxTnpVMVdoY05NekF3TXpJMU1qRTFOelUxV2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUUNZMERwQUdaS28xY1NycDVZOUhLMTZSRDZTUE9GdGZCdktJUWhaQ1p1cDZobU8KVlY2UkVKVzBJeUxJSXhaSkZEK25DdG8xaU9rNmdGTmloZlJmYjZ5WXJoSmZLU2hrMHZtdDhnQ21YZmVqbnBORgpEejcva0JLYkdrNndYY3pmMVRzQlJjZm9qWlhaby9PRlY0dHpOM256WWFmeGExZ3FDdTg2bmVqMnZRL3c0U1JPCkhXY0k2eEttdWZRTzBGMkFjM3cva09FWkZTYm1lZEZJeW1LcHppQ1lOY2tZWnc4TGRBZDBVL3YvTEl5ZE1ENGsKeWJIMjVsanBGNVgwYXZYSlV6L3lENkdaTklCZWFGZXBveU1lbUpZOUx6SllOckxEN2JIV1FZd0FUMytSRzVRNApMTFJjZnpYd3ZPeDJibDhVMlY0L0xVTUJhWFZuR3llT096NXAxOERUQWdNQkFBR2pJekFoTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBR1ZLQlgKdUpYYzlFVU9RUk1HcEV4Q2JZakVXaWR0NFV0Y21MdkdoSmpCbHBNaEx0d1ZZZTdRWlRZbHBuY0MzMUdJcjJERwpxM3QvT0pQOWVnR1Y0S2VVV29QaG9aYitNcVpGZUJ5TEl4Tys2Mm5lMFJaWEVtTkhJWndOYmdvZ0lBa0lrRFJDCmJYR0lGaGJxSGZkbEk1MnhJRHhJc2pDUzdYd3BpVk9ZQWtDMStsZWQzUlM2QnJVQTMzS3dGMStEM0l4QTJ0VzEKSStvSzRRNFlHYWt2Mlo1TkJHN0Q4czNVUmpIN0tUS0cwRnIySVcvMUpvbkc3eGd0SEtwTlB4Y1NEemtKRjEyOApydnYxWkdiZ25wL2c1aEduZnZNd0NRZUdRRFBJMjBKdUpMcklVTkpxRTJ6VjE5eWpOVncyNVBKQTErM2xXdzEwCktrdzVnci93WTdxMWtzbisKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2020-03-27T21:58:10Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.106.141.92:7443/apis/wardle.example.com/v1alpha1: bad status from https://10.106.141.92:7443/apis/wardle.example.com/v1alpha1: 403"}]}}
Mar 27 21:59:10.424: INFO: current pods: {"metadata":{"selfLink":"/api/v1/namespaces/aggregator-8212/pods","resourceVersion":"21358"},"items":[{"metadata":{"name":"sample-apiserver-deployment-54b47bf96b-bt6sq","generateName":"sample-apiserver-deployment-54b47bf96b-","namespace":"aggregator-8212","selfLink":"/api/v1/namespaces/aggregator-8212/pods/sample-apiserver-deployment-54b47bf96b-bt6sq","uid":"bb316f0b-32e5-4b6e-965f-35fa2a2a21e1","resourceVersion":"21240","creationTimestamp":"2020-03-27T21:57:55Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"54b47bf96b"},"annotations":{"cni.projectcalico.org/podIP":"192.168.0.19/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-54b47bf96b","uid":"e2a9bff8-50f1-45fe-b0a5-a4f68411aaf5","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"default-token-9t28r","secret":{"secretName":"default-token-9t28r","defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"default-token-9t28r","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.4","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"default-token-9t28r","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-27T21:57:55Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-27T21:58:08Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-27T21:58:08Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-27T21:57:55Z"}],"hostIP":"10.150.0.6","podIP":"192.168.0.19","podIPs":[{"ip":"192.168.0.19"}],"startTime":"2020-03-27T21:57:55Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2020-03-27T21:58:08Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.4","imageID":"k8s.gcr.io/etcd@sha256:e10ee22e7b56d08b7cb7da2a390863c445d66a7284294cee8c9decbfb3ba4359","containerID":"containerd://045536ba7bad62570bc1d599b92bb831c0c540d16dcca4119f3fd2f8a5ed57e5","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2020-03-27T21:57:58Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17","imageID":"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55","containerID":"containerd://e16789cdf77224b33d433a836de2b0f4cd8ca37ebb1fd07a0ea0003dec977627","started":true}],"qosClass":"BestEffort"}}]}
Mar 27 21:59:10.517: INFO: logs of sample-apiserver-deployment-54b47bf96b-bt6sq/sample-apiserver (error: <nil>): W0327 21:57:59.373192       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0327 21:57:59.373280       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
I0327 21:57:59.393709       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
I0327 21:57:59.393755       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
I0327 21:57:59.399883       1 client.go:361] parsed scheme: "endpoint"
I0327 21:57:59.399931       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0327 21:57:59.400298       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0327 21:58:00.031732       1 client.go:361] parsed scheme: "endpoint"
I0327 21:58:00.031847       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0327 21:58:00.032209       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0327 21:58:00.404136       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0327 21:58:01.032839       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0327 21:58:02.021412       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0327 21:58:02.616042       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0327 21:58:04.083538       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0327 21:58:04.732134       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0327 21:58:07.643749       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0327 21:58:14.421650       1 client.go:361] parsed scheme: "endpoint"
I0327 21:58:14.421789       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0327 21:58:14.423233       1 client.go:361] parsed scheme: "endpoint"
I0327 21:58:14.423262       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0327 21:58:14.425589       1 client.go:361] parsed scheme: "endpoint"
I0327 21:58:14.425746       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0327 21:58:14.475206       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0327 21:58:14.475297       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0327 21:58:14.475351       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0327 21:58:14.475375       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0327 21:58:14.476121       1 secure_serving.go:178] Serving securely on [::]:443
I0327 21:58:14.476469       1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key
I0327 21:58:14.476600       1 tlsconfig.go:219] Starting DynamicServingCertificateController
E0327 21:58:14.480098       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:14.480914       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:15.482070       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:15.487998       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:16.483949       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:16.490239       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:17.485978       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:17.491909       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:18.487971       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:18.493770       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:19.489956       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:19.495486       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:20.491945       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:20.497281       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:21.493891       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:21.498964       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:22.496050       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:22.500951       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:23.498469       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:23.503051       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:24.500885       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:24.504932       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:25.503398       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:25.506613       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:26.505986       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:26.508150       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:27.508040       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:27.509908       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:28.510109       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:28.511459       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:29.512070       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:29.514139       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:30.514012       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:30.516860       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:31.516111       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:31.519237       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:32.520006       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:32.521007       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:33.522098       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:33.523880       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:34.524465       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:34.526576       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:35.526813       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:35.528210       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:36.528717       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:36.531215       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:37.530628       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:37.532903       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:38.532376       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:38.535393       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:39.534316       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:39.536861       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:40.536380       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:40.538272       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:41.538912       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:41.539841       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:42.540790       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:42.542718       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:43.542907       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:43.544270       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:44.545370       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:44.546976       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:45.547313       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:45.548358       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:46.549457       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:46.550076       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:47.551389       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:47.553073       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:48.553339       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:48.555667       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:49.555463       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:49.558285       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:50.557339       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:50.559888       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:51.559243       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:51.562589       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:52.561197       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:52.564050       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:53.563419       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:53.565594       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:54.566164       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:54.567025       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:55.568108       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:55.568697       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:56.570310       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:56.570741       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:57.572296       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:57.572767       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:58.574294       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:58.575851       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:59.576387       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:58:59.577159       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:59:00.578342       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:59:00.579025       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:59:01.580180       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:59:01.582011       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:59:02.582221       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0327 21:59:02.583608       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-8212:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
... skipping 14 lines ...

Mar 27 21:59:10.550: INFO: logs of sample-apiserver-deployment-54b47bf96b-bt6sq/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-27 21:58:08.478028 I | etcdmain: etcd Version: 3.4.4
2020-03-27 21:58:08.478068 I | etcdmain: Git SHA: c65a9e2dd
2020-03-27 21:58:08.478073 I | etcdmain: Go Version: go1.12.12
2020-03-27 21:58:08.478079 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-27 21:58:08.478085 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-03-27 21:58:08.478094 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
2020-03-27 21:58:09.495880 I | etcdserver: setting up the initial cluster version to 3.4
2020-03-27 21:58:09.496001 I | embed: ready to serve client requests
2020-03-27 21:58:09.497274 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
2020-03-27 21:58:09.498226 N | etcdserver/membership: set the initial cluster version to 3.4
2020-03-27 21:58:09.498527 I | etcdserver/api: enabled capabilities for version 3.4

Mar 27 21:59:10.550: FAIL: gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc0001ba000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 153 lines ...
[sig-api-machinery] Aggregator
test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
  test/e2e/framework/framework.go:597

  Mar 27 21:59:10.550: gave up waiting for apiservice wardle to come up successfully
  Unexpected error:
      <*errors.errorString | 0xc0001ba000>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  test/e2e/apimachinery/aggregator.go:401
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":283,"completed":173,"skipped":2695,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 21:59:12.940: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8990e11-af22-467f-8000-00233086882c" in namespace "projected-7592" to be "Succeeded or Failed"
Mar 27 21:59:12.971: INFO: Pod "downwardapi-volume-e8990e11-af22-467f-8000-00233086882c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.625197ms
Mar 27 21:59:15.001: INFO: Pod "downwardapi-volume-e8990e11-af22-467f-8000-00233086882c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060360264s
STEP: Saw pod success
Mar 27 21:59:15.001: INFO: Pod "downwardapi-volume-e8990e11-af22-467f-8000-00233086882c" satisfied condition "Succeeded or Failed"
Mar 27 21:59:15.031: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-e8990e11-af22-467f-8000-00233086882c container client-container: <nil>
STEP: delete the pod
Mar 27 21:59:15.113: INFO: Waiting for pod downwardapi-volume-e8990e11-af22-467f-8000-00233086882c to disappear
Mar 27 21:59:15.142: INFO: Pod downwardapi-volume-e8990e11-af22-467f-8000-00233086882c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 27 21:59:15.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7592" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":174,"skipped":2725,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Mar 27 21:59:18.121: INFO: Unable to read jessie_udp@dns-test-service.dns-5602 from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:18.151: INFO: Unable to read jessie_tcp@dns-test-service.dns-5602 from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:18.182: INFO: Unable to read jessie_udp@dns-test-service.dns-5602.svc from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:18.212: INFO: Unable to read jessie_tcp@dns-test-service.dns-5602.svc from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:18.244: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5602.svc from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:18.275: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5602.svc from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:18.460: INFO: Lookups using dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5602 wheezy_tcp@dns-test-service.dns-5602 wheezy_udp@dns-test-service.dns-5602.svc wheezy_tcp@dns-test-service.dns-5602.svc wheezy_udp@_http._tcp.dns-test-service.dns-5602.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5602.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5602 jessie_tcp@dns-test-service.dns-5602 jessie_udp@dns-test-service.dns-5602.svc jessie_tcp@dns-test-service.dns-5602.svc jessie_udp@_http._tcp.dns-test-service.dns-5602.svc jessie_tcp@_http._tcp.dns-test-service.dns-5602.svc]

Mar 27 21:59:23.492: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:23.525: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:23.556: INFO: Unable to read wheezy_udp@dns-test-service.dns-5602 from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:23.587: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5602 from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:23.618: INFO: Unable to read wheezy_udp@dns-test-service.dns-5602.svc from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
... skipping 5 lines ...
Mar 27 21:59:24.002: INFO: Unable to read jessie_udp@dns-test-service.dns-5602 from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:24.034: INFO: Unable to read jessie_tcp@dns-test-service.dns-5602 from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:24.065: INFO: Unable to read jessie_udp@dns-test-service.dns-5602.svc from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:24.096: INFO: Unable to read jessie_tcp@dns-test-service.dns-5602.svc from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:24.126: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5602.svc from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:24.156: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5602.svc from pod dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51: the server could not find the requested resource (get pods dns-test-cec01937-0e02-4344-b452-81c90e393f51)
Mar 27 21:59:24.342: INFO: Lookups using dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5602 wheezy_tcp@dns-test-service.dns-5602 wheezy_udp@dns-test-service.dns-5602.svc wheezy_tcp@dns-test-service.dns-5602.svc wheezy_udp@_http._tcp.dns-test-service.dns-5602.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5602.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5602 jessie_tcp@dns-test-service.dns-5602 jessie_udp@dns-test-service.dns-5602.svc jessie_tcp@dns-test-service.dns-5602.svc jessie_udp@_http._tcp.dns-test-service.dns-5602.svc jessie_tcp@_http._tcp.dns-test-service.dns-5602.svc]

... skipping 55 lines ...

Mar 27 21:59:49.327: INFO: DNS probes using dns-5602/dns-test-cec01937-0e02-4344-b452-81c90e393f51 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 27 21:59:49.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5602" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":283,"completed":175,"skipped":2730,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 27 21:59:49.584: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Mar 27 21:59:49.746: INFO: Waiting up to 5m0s for pod "client-containers-c4446187-baf0-48a6-9fb2-e702aeb95413" in namespace "containers-7019" to be "Succeeded or Failed"
Mar 27 21:59:49.774: INFO: Pod "client-containers-c4446187-baf0-48a6-9fb2-e702aeb95413": Phase="Pending", Reason="", readiness=false. Elapsed: 28.245129ms
Mar 27 21:59:51.803: INFO: Pod "client-containers-c4446187-baf0-48a6-9fb2-e702aeb95413": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.05748855s
STEP: Saw pod success
Mar 27 21:59:51.803: INFO: Pod "client-containers-c4446187-baf0-48a6-9fb2-e702aeb95413" satisfied condition "Succeeded or Failed"
Mar 27 21:59:51.832: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod client-containers-c4446187-baf0-48a6-9fb2-e702aeb95413 container test-container: <nil>
STEP: delete the pod
Mar 27 21:59:51.909: INFO: Waiting for pod client-containers-c4446187-baf0-48a6-9fb2-e702aeb95413 to disappear
Mar 27 21:59:51.940: INFO: Pod client-containers-c4446187-baf0-48a6-9fb2-e702aeb95413 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 27 21:59:51.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7019" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":283,"completed":176,"skipped":2740,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 21:59:56.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5310" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":283,"completed":177,"skipped":2764,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-bdca62e4-321b-4b52-9b84-2d3e42796edf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 27 22:01:25.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1023" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":178,"skipped":2787,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-72abdd89-b085-427f-9741-cd37e1d1e6f6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 27 22:01:29.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-695" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":179,"skipped":2817,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 20 lines ...
Mar 27 22:01:34.518: INFO: Pod "test-cleanup-deployment-577c77b589-s429r" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-s429r test-cleanup-deployment-577c77b589- deployment-2165 /api/v1/namespaces/deployment-2165/pods/test-cleanup-deployment-577c77b589-s429r b15eaf12-43e5-43a0-95c1-ca1b0ed851c4 21963 0 2020-03-27 22:01:32 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:577c77b589] map[cni.projectcalico.org/podIP:192.168.63.80/32] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 3366c3dc-8d68-41da-82b3-810e6134fe97 0xc0021605a7 0xc0021605a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7bv96,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7bv96,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7bv96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volume
Device{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:01:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:01:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:01:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:01:32 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.63.80,StartTime:2020-03-27 22:01:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 22:01:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://154e2723bb0fc1e582e08326d356b580dc74b23753e210ed6661c4777f07bcfb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.63.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 27 22:01:34.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2165" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":283,"completed":180,"skipped":2861,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Mar 27 22:01:40.953: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
Mar 27 22:01:40.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-264" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":283,"completed":181,"skipped":2877,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] HostPath
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test hostPath mode
Mar 27 22:01:41.237: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6157" to be "Succeeded or Failed"
Mar 27 22:01:41.268: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 30.363507ms
Mar 27 22:01:43.299: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061371316s
STEP: Saw pod success
Mar 27 22:01:43.299: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Mar 27 22:01:43.329: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Mar 27 22:01:43.404: INFO: Waiting for pod pod-host-path-test to disappear
Mar 27 22:01:43.433: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:175
Mar 27 22:01:43.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6157" for this suite.
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":182,"skipped":2937,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 22:01:43.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51cc942c-8db6-493c-8ca4-6073b6cc8131" in namespace "downward-api-8317" to be "Succeeded or Failed"
Mar 27 22:01:43.724: INFO: Pod "downwardapi-volume-51cc942c-8db6-493c-8ca4-6073b6cc8131": Phase="Pending", Reason="", readiness=false. Elapsed: 29.594049ms
Mar 27 22:01:45.754: INFO: Pod "downwardapi-volume-51cc942c-8db6-493c-8ca4-6073b6cc8131": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058783823s
STEP: Saw pod success
Mar 27 22:01:45.754: INFO: Pod "downwardapi-volume-51cc942c-8db6-493c-8ca4-6073b6cc8131" satisfied condition "Succeeded or Failed"
Mar 27 22:01:45.783: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-51cc942c-8db6-493c-8ca4-6073b6cc8131 container client-container: <nil>
STEP: delete the pod
Mar 27 22:01:45.861: INFO: Waiting for pod downwardapi-volume-51cc942c-8db6-493c-8ca4-6073b6cc8131 to disappear
Mar 27 22:01:45.890: INFO: Pod downwardapi-volume-51cc942c-8db6-493c-8ca4-6073b6cc8131 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 27 22:01:45.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8317" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":183,"skipped":2942,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 27 22:01:46.109: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 22:01:46.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5507" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":283,"completed":184,"skipped":2972,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:180
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 27 22:01:49.210: INFO: Waiting up to 5m0s for pod "client-envvars-a9c1bf2f-66a3-4236-b4c6-a2a56d4fbae0" in namespace "pods-6058" to be "Succeeded or Failed"
Mar 27 22:01:49.239: INFO: Pod "client-envvars-a9c1bf2f-66a3-4236-b4c6-a2a56d4fbae0": Phase="Pending", Reason="", readiness=false. Elapsed: 29.026771ms
Mar 27 22:01:51.268: INFO: Pod "client-envvars-a9c1bf2f-66a3-4236-b4c6-a2a56d4fbae0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058130591s
STEP: Saw pod success
Mar 27 22:01:51.268: INFO: Pod "client-envvars-a9c1bf2f-66a3-4236-b4c6-a2a56d4fbae0" satisfied condition "Succeeded or Failed"
Mar 27 22:01:51.297: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod client-envvars-a9c1bf2f-66a3-4236-b4c6-a2a56d4fbae0 container env3cont: <nil>
STEP: delete the pod
Mar 27 22:01:51.372: INFO: Waiting for pod client-envvars-a9c1bf2f-66a3-4236-b4c6-a2a56d4fbae0 to disappear
Mar 27 22:01:51.402: INFO: Pod client-envvars-a9c1bf2f-66a3-4236-b4c6-a2a56d4fbae0 no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 27 22:01:51.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6058" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":283,"completed":185,"skipped":2972,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Mar 27 22:01:58.179: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 22:01:58.437: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:175
Mar 27 22:01:58.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6602" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":186,"skipped":3000,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 27 22:01:58.527: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Mar 27 22:01:58.697: INFO: Waiting up to 5m0s for pod "var-expansion-ae0b84f9-bef5-4aff-8a2f-1639b15c8c81" in namespace "var-expansion-4759" to be "Succeeded or Failed"
Mar 27 22:01:58.726: INFO: Pod "var-expansion-ae0b84f9-bef5-4aff-8a2f-1639b15c8c81": Phase="Pending", Reason="", readiness=false. Elapsed: 29.700157ms
Mar 27 22:02:00.756: INFO: Pod "var-expansion-ae0b84f9-bef5-4aff-8a2f-1639b15c8c81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059074149s
STEP: Saw pod success
Mar 27 22:02:00.756: INFO: Pod "var-expansion-ae0b84f9-bef5-4aff-8a2f-1639b15c8c81" satisfied condition "Succeeded or Failed"
Mar 27 22:02:00.784: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod var-expansion-ae0b84f9-bef5-4aff-8a2f-1639b15c8c81 container dapi-container: <nil>
STEP: delete the pod
Mar 27 22:02:00.861: INFO: Waiting for pod var-expansion-ae0b84f9-bef5-4aff-8a2f-1639b15c8c81 to disappear
Mar 27 22:02:00.891: INFO: Pod var-expansion-ae0b84f9-bef5-4aff-8a2f-1639b15c8c81 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 27 22:02:00.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4759" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":283,"completed":187,"skipped":3004,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 46 lines ...
Mar 27 22:02:16.686: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6263/pods","resourceVersion":"22437"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 27 22:02:16.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6263" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":283,"completed":188,"skipped":3004,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 27 22:02:16.912: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 27 22:02:17.076: INFO: Waiting up to 5m0s for pod "downward-api-62f30c56-f1ea-4991-b99b-da85559bbc8b" in namespace "downward-api-2464" to be "Succeeded or Failed"
Mar 27 22:02:17.105: INFO: Pod "downward-api-62f30c56-f1ea-4991-b99b-da85559bbc8b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.550082ms
Mar 27 22:02:19.134: INFO: Pod "downward-api-62f30c56-f1ea-4991-b99b-da85559bbc8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058081483s
STEP: Saw pod success
Mar 27 22:02:19.134: INFO: Pod "downward-api-62f30c56-f1ea-4991-b99b-da85559bbc8b" satisfied condition "Succeeded or Failed"
Mar 27 22:02:19.164: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downward-api-62f30c56-f1ea-4991-b99b-da85559bbc8b container dapi-container: <nil>
STEP: delete the pod
Mar 27 22:02:19.240: INFO: Waiting for pod downward-api-62f30c56-f1ea-4991-b99b-da85559bbc8b to disappear
Mar 27 22:02:19.271: INFO: Pod downward-api-62f30c56-f1ea-4991-b99b-da85559bbc8b no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 27 22:02:19.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2464" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":283,"completed":189,"skipped":3019,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
Mar 27 22:02:19.649: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a5dd4ad7-01db-4773-9775-f54b47be1b01", Controller:(*bool)(0xc0019e66a6), BlockOwnerDeletion:(*bool)(0xc0019e66a7)}}
Mar 27 22:02:19.682: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"055641ee-7523-4f03-b609-890e35edea29", Controller:(*bool)(0xc0036e0826), BlockOwnerDeletion:(*bool)(0xc0036e0827)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 27 22:02:24.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-296" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":283,"completed":190,"skipped":3029,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
Mar 27 22:03:15.465: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-27T22:02:35Z generation:2 name:name2 resourceVersion:22688 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3af9874e-0a1b-4e64-b647-c274a9fd0fad] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 22:03:25.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8255" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":283,"completed":191,"skipped":3050,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 22:03:25.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84068ae7-3701-452c-a138-95bbe4fd3968" in namespace "projected-6980" to be "Succeeded or Failed"
Mar 27 22:03:25.831: INFO: Pod "downwardapi-volume-84068ae7-3701-452c-a138-95bbe4fd3968": Phase="Pending", Reason="", readiness=false. Elapsed: 29.420739ms
Mar 27 22:03:27.863: INFO: Pod "downwardapi-volume-84068ae7-3701-452c-a138-95bbe4fd3968": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061326451s
STEP: Saw pod success
Mar 27 22:03:27.863: INFO: Pod "downwardapi-volume-84068ae7-3701-452c-a138-95bbe4fd3968" satisfied condition "Succeeded or Failed"
Mar 27 22:03:27.892: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-84068ae7-3701-452c-a138-95bbe4fd3968 container client-container: <nil>
STEP: delete the pod
Mar 27 22:03:27.973: INFO: Waiting for pod downwardapi-volume-84068ae7-3701-452c-a138-95bbe4fd3968 to disappear
Mar 27 22:03:28.001: INFO: Pod downwardapi-volume-84068ae7-3701-452c-a138-95bbe4fd3968 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 27 22:03:28.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6980" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":192,"skipped":3092,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 190 lines ...
Mar 27 22:03:42.550: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 27 22:03:42.550: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 22:03:42.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1535" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":283,"completed":193,"skipped":3107,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 27 lines ...
Mar 27 22:04:04.570: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 22:04:05.815: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 27 22:04:05.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-677" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":194,"skipped":3112,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 27 22:04:08.785: INFO: Successfully updated pod "labelsupdatecdb935e6-6e4c-45f6-a817-b7839c8b208d"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 27 22:04:12.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5775" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":283,"completed":195,"skipped":3127,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 27 22:04:15.275: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 27 22:04:15.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8840" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":196,"skipped":3133,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 27 22:04:21.248: INFO: stderr: ""
Mar 27 22:04:21.248: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3849-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 22:04:24.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1247" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":283,"completed":197,"skipped":3155,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Mar 27 22:04:27.211: INFO: Terminating Job.batch foo pods took: 300.467209ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 27 22:05:06.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9128" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":283,"completed":198,"skipped":3161,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Mar 27 22:05:09.214: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 22:05:09.477: INFO: Deleting pod dns-6698...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 27 22:05:09.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6698" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":283,"completed":199,"skipped":3174,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 60 lines ...
Mar 27 22:05:24.706: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7911/pods","resourceVersion":"23574"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 27 22:05:24.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7911" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":283,"completed":200,"skipped":3217,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Mar 27 22:05:27.204: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 22:05:27.470: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 22:05:27.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5347" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":283,"completed":201,"skipped":3252,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 27 22:05:33.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7102" for this suite.
STEP: Destroying namespace "webhook-7102-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":283,"completed":202,"skipped":3261,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  test/e2e/framework/framework.go:175
Mar 27 22:05:47.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3569" for this suite.
STEP: Destroying namespace "nsdeletetest-722" for this suite.
Mar 27 22:05:47.622: INFO: Namespace nsdeletetest-722 was already deleted
STEP: Destroying namespace "nsdeletetest-2340" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":283,"completed":203,"skipped":3287,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 27 22:05:52.551: INFO: stderr: ""
Mar 27 22:05:52.551: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5591-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 22:05:55.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9354" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":283,"completed":204,"skipped":3306,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 11 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 27 22:05:58.797: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a30caa56-8355-40be-869b-34c799db71e6"
Mar 27 22:05:58.797: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a30caa56-8355-40be-869b-34c799db71e6" in namespace "pods-5449" to be "terminated due to deadline exceeded"
Mar 27 22:05:58.826: INFO: Pod "pod-update-activedeadlineseconds-a30caa56-8355-40be-869b-34c799db71e6": Phase="Running", Reason="", readiness=true. Elapsed: 28.969056ms
Mar 27 22:06:00.855: INFO: Pod "pod-update-activedeadlineseconds-a30caa56-8355-40be-869b-34c799db71e6": Phase="Running", Reason="", readiness=true. Elapsed: 2.058807443s
Mar 27 22:06:02.885: INFO: Pod "pod-update-activedeadlineseconds-a30caa56-8355-40be-869b-34c799db71e6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.088527044s
Mar 27 22:06:02.885: INFO: Pod "pod-update-activedeadlineseconds-a30caa56-8355-40be-869b-34c799db71e6" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 27 22:06:02.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5449" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":283,"completed":205,"skipped":3311,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 22:06:02.973: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 27 22:06:03.141: INFO: Waiting up to 5m0s for pod "pod-97467b83-6f85-4926-88a0-2684fd48b988" in namespace "emptydir-6801" to be "Succeeded or Failed"
Mar 27 22:06:03.177: INFO: Pod "pod-97467b83-6f85-4926-88a0-2684fd48b988": Phase="Pending", Reason="", readiness=false. Elapsed: 36.330861ms
Mar 27 22:06:05.207: INFO: Pod "pod-97467b83-6f85-4926-88a0-2684fd48b988": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06643147s
STEP: Saw pod success
Mar 27 22:06:05.207: INFO: Pod "pod-97467b83-6f85-4926-88a0-2684fd48b988" satisfied condition "Succeeded or Failed"
Mar 27 22:06:05.236: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-97467b83-6f85-4926-88a0-2684fd48b988 container test-container: <nil>
STEP: delete the pod
Mar 27 22:06:05.319: INFO: Waiting for pod pod-97467b83-6f85-4926-88a0-2684fd48b988 to disappear
Mar 27 22:06:05.349: INFO: Pod pod-97467b83-6f85-4926-88a0-2684fd48b988 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 22:06:05.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6801" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":206,"skipped":3323,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Mar 27 22:06:16.566: INFO: stderr: ""
Mar 27 22:06:16.566: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 22:06:16.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1035" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":283,"completed":207,"skipped":3368,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
Mar 27 22:06:16.660: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 27 22:06:18.948: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 27 22:06:19.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-269" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":208,"skipped":3395,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 27 22:06:35.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2369" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":283,"completed":209,"skipped":3425,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 27 22:06:35.818: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Mar 27 22:06:35.980: INFO: Waiting up to 5m0s for pod "client-containers-06ff931c-f614-4883-802a-525601ed63f8" in namespace "containers-967" to be "Succeeded or Failed"
Mar 27 22:06:36.009: INFO: Pod "client-containers-06ff931c-f614-4883-802a-525601ed63f8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.56207ms
Mar 27 22:06:38.039: INFO: Pod "client-containers-06ff931c-f614-4883-802a-525601ed63f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.05942438s
STEP: Saw pod success
Mar 27 22:06:38.039: INFO: Pod "client-containers-06ff931c-f614-4883-802a-525601ed63f8" satisfied condition "Succeeded or Failed"
Mar 27 22:06:38.069: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod client-containers-06ff931c-f614-4883-802a-525601ed63f8 container test-container: <nil>
STEP: delete the pod
Mar 27 22:06:38.157: INFO: Waiting for pod client-containers-06ff931c-f614-4883-802a-525601ed63f8 to disappear
Mar 27 22:06:38.185: INFO: Pod client-containers-06ff931c-f614-4883-802a-525601ed63f8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 27 22:06:38.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-967" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":283,"completed":210,"skipped":3437,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 22:06:40.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4107" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":211,"skipped":3447,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
Mar 27 22:08:16.518: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:175
Mar 27 22:08:16.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-7474" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":283,"completed":212,"skipped":3463,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Mar 27 22:08:20.875: INFO: stderr: ""
Mar 27 22:08:20.875: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 22:08:20.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4874" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":283,"completed":213,"skipped":3478,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-7a20d011-530a-4608-a618-9ba4111d63bd
STEP: Creating a pod to test consume configMaps
Mar 27 22:08:21.176: INFO: Waiting up to 5m0s for pod "pod-configmaps-8776202e-9b4e-4429-b2b8-1d10be679cdd" in namespace "configmap-8189" to be "Succeeded or Failed"
Mar 27 22:08:21.208: INFO: Pod "pod-configmaps-8776202e-9b4e-4429-b2b8-1d10be679cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 31.706776ms
Mar 27 22:08:23.237: INFO: Pod "pod-configmaps-8776202e-9b4e-4429-b2b8-1d10be679cdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061455852s
STEP: Saw pod success
Mar 27 22:08:23.237: INFO: Pod "pod-configmaps-8776202e-9b4e-4429-b2b8-1d10be679cdd" satisfied condition "Succeeded or Failed"
Mar 27 22:08:23.269: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-configmaps-8776202e-9b4e-4429-b2b8-1d10be679cdd container configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 22:08:23.356: INFO: Waiting for pod pod-configmaps-8776202e-9b4e-4429-b2b8-1d10be679cdd to disappear
Mar 27 22:08:23.387: INFO: Pod pod-configmaps-8776202e-9b4e-4429-b2b8-1d10be679cdd no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 22:08:23.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8189" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":214,"skipped":3508,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 22:08:23.479: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 27 22:08:23.645: INFO: Waiting up to 5m0s for pod "pod-f0238614-20e0-4899-92d0-4a7511c15898" in namespace "emptydir-6416" to be "Succeeded or Failed"
Mar 27 22:08:23.679: INFO: Pod "pod-f0238614-20e0-4899-92d0-4a7511c15898": Phase="Pending", Reason="", readiness=false. Elapsed: 33.344608ms
Mar 27 22:08:25.709: INFO: Pod "pod-f0238614-20e0-4899-92d0-4a7511c15898": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063436971s
STEP: Saw pod success
Mar 27 22:08:25.709: INFO: Pod "pod-f0238614-20e0-4899-92d0-4a7511c15898" satisfied condition "Succeeded or Failed"
Mar 27 22:08:25.738: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-f0238614-20e0-4899-92d0-4a7511c15898 container test-container: <nil>
STEP: delete the pod
Mar 27 22:08:25.824: INFO: Waiting for pod pod-f0238614-20e0-4899-92d0-4a7511c15898 to disappear
Mar 27 22:08:25.854: INFO: Pod pod-f0238614-20e0-4899-92d0-4a7511c15898 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 22:08:25.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6416" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":215,"skipped":3516,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
Mar 27 22:08:28.733: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 27 22:08:28.733: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 22:08:28.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2702" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":283,"completed":216,"skipped":3518,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-78f507d2-5fd7-4455-9abc-0fd87836922f
STEP: Creating a pod to test consume configMaps
Mar 27 22:08:29.021: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-44be66c1-8dca-4105-a8b1-09884e47d5dc" in namespace "projected-9742" to be "Succeeded or Failed"
Mar 27 22:08:29.051: INFO: Pod "pod-projected-configmaps-44be66c1-8dca-4105-a8b1-09884e47d5dc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.338457ms
Mar 27 22:08:31.081: INFO: Pod "pod-projected-configmaps-44be66c1-8dca-4105-a8b1-09884e47d5dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060374353s
STEP: Saw pod success
Mar 27 22:08:31.081: INFO: Pod "pod-projected-configmaps-44be66c1-8dca-4105-a8b1-09884e47d5dc" satisfied condition "Succeeded or Failed"
Mar 27 22:08:31.111: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-configmaps-44be66c1-8dca-4105-a8b1-09884e47d5dc container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 22:08:31.192: INFO: Waiting for pod pod-projected-configmaps-44be66c1-8dca-4105-a8b1-09884e47d5dc to disappear
Mar 27 22:08:31.222: INFO: Pod pod-projected-configmaps-44be66c1-8dca-4105-a8b1-09884e47d5dc no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 27 22:08:31.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9742" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":283,"completed":217,"skipped":3519,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 31.636058ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 27 22:08:32.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2999" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":283,"completed":218,"skipped":3528,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 43 lines ...
Mar 27 22:08:49.136: INFO: Pod "test-rollover-deployment-78df7bc796-jf9cv" is available:
&Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-jf9cv test-rollover-deployment-78df7bc796- deployment-2933 /api/v1/namespaces/deployment-2933/pods/test-rollover-deployment-78df7bc796-jf9cv 99c956e4-c6ab-4ba9-9ae7-f815b6f7f6f1 24731 0 2020-03-27 22:08:36 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78df7bc796] map[cni.projectcalico.org/podIP:192.168.63.95/32] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 5f698929-f5d2-4ac4-91d0-aff83c4963d3 0xc001ebce17 0xc001ebce18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wc82d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wc82d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wc82d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:08:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:08:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:08:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:08:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:192.168.63.95,StartTime:2020-03-27 22:08:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 22:08:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://ca923bd7d86307193d9c3071f6be204df5b1a1f6e44504421e07744669174606,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.63.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 27 22:08:49.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2933" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":283,"completed":219,"skipped":3534,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 58 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 27 22:08:53.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5892" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":283,"completed":220,"skipped":3649,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 27 22:09:00.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7389" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":283,"completed":221,"skipped":3674,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
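The ResourceQuota test above creates a quota object and waits for its status to be calculated by the quota controller. A minimal sketch of such an object (all names and values here are illustrative, not the test's actual fixture):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota            # illustrative name
  namespace: resourcequota-7389
spec:
  hard:
    pods: "5"                 # hard caps; the controller fills in status.used
    requests.cpu: "1"
    requests.memory: 500Mi
```

Once created, `kubectl get resourcequota test-quota -o yaml` should show a populated `status.used` block shortly afterward, which is what "status is promptly calculated" checks.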
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 27 22:09:00.635: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 27 22:09:00.797: INFO: Waiting up to 5m0s for pod "downward-api-364c6904-2f08-4563-ad0e-12e88a4e2e71" in namespace "downward-api-564" to be "Succeeded or Failed"
Mar 27 22:09:00.829: INFO: Pod "downward-api-364c6904-2f08-4563-ad0e-12e88a4e2e71": Phase="Pending", Reason="", readiness=false. Elapsed: 32.141636ms
Mar 27 22:09:02.859: INFO: Pod "downward-api-364c6904-2f08-4563-ad0e-12e88a4e2e71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062221763s
STEP: Saw pod success
Mar 27 22:09:02.859: INFO: Pod "downward-api-364c6904-2f08-4563-ad0e-12e88a4e2e71" satisfied condition "Succeeded or Failed"
Mar 27 22:09:02.888: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downward-api-364c6904-2f08-4563-ad0e-12e88a4e2e71 container dapi-container: <nil>
STEP: delete the pod
Mar 27 22:09:02.966: INFO: Waiting for pod downward-api-364c6904-2f08-4563-ad0e-12e88a4e2e71 to disappear
Mar 27 22:09:02.996: INFO: Pod downward-api-364c6904-2f08-4563-ad0e-12e88a4e2e71 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 27 22:09:02.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-564" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":283,"completed":222,"skipped":3690,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
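The Downward API test above exercises a pod along the following lines; this is a hedged sketch (image, names, and command are illustrative, not the test's actual manifest) showing pod name, namespace, and IP injected as env vars via `fieldRef`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                  # assumption: the real test uses a test image
    command: ["sh", "-c", "env | grep -E 'POD_(NAME|NAMESPACE|IP)'"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```

The test then reads the container's logs and asserts the expected values appear, which is why it waits for the pod to reach "Succeeded or Failed" before fetching logs.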
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-bf5781cc-7012-4dd4-855b-68b94ad26e0f
STEP: Creating a pod to test consume secrets
Mar 27 22:09:03.279: INFO: Waiting up to 5m0s for pod "pod-secrets-c3f7624a-7ac9-4542-9afa-185ee82f4a74" in namespace "secrets-4488" to be "Succeeded or Failed"
Mar 27 22:09:03.316: INFO: Pod "pod-secrets-c3f7624a-7ac9-4542-9afa-185ee82f4a74": Phase="Pending", Reason="", readiness=false. Elapsed: 37.448761ms
Mar 27 22:09:05.346: INFO: Pod "pod-secrets-c3f7624a-7ac9-4542-9afa-185ee82f4a74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.067089409s
STEP: Saw pod success
Mar 27 22:09:05.346: INFO: Pod "pod-secrets-c3f7624a-7ac9-4542-9afa-185ee82f4a74" satisfied condition "Succeeded or Failed"
Mar 27 22:09:05.377: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-secrets-c3f7624a-7ac9-4542-9afa-185ee82f4a74 container secret-volume-test: <nil>
STEP: delete the pod
Mar 27 22:09:05.450: INFO: Waiting for pod pod-secrets-c3f7624a-7ac9-4542-9afa-185ee82f4a74 to disappear
Mar 27 22:09:05.479: INFO: Pod pod-secrets-c3f7624a-7ac9-4542-9afa-185ee82f4a74 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 27 22:09:05.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4488" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":283,"completed":223,"skipped":3714,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
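The Secrets test above mounts the same Secret into a pod at two different paths. A minimal sketch of that shape (Secret name, mount paths, and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo            # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    secret:
      secretName: my-secret          # assumption: a pre-created Secret
  - name: secret-volume-2
    secret:
      secretName: my-secret          # same Secret, mounted a second time
  containers:
  - name: secret-volume-test
    image: busybox                   # assumption
    command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
```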
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:175
Mar 27 22:09:05.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4868" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":283,"completed":224,"skipped":3730,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
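The QoS-class test above relies on the rule that a pod whose containers set limits equal to requests for both cpu and memory is classified as `Guaranteed`. A hedged sketch of such a pod (name, image, and quantities are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                    # illustrative name
spec:
  containers:
  - name: agnhost
    image: busybox                   # assumption
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                        # limits == requests -> status.qosClass: Guaranteed
        cpu: 100m
        memory: 100Mi
```

By contrast, the pod dumped earlier in this log sets no requests or limits at all, which is why its status shows `QOSClass:BestEffort`.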
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 27 22:09:05.832: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Mar 27 22:09:05.997: INFO: Waiting up to 5m0s for pod "var-expansion-c6b71122-cf07-4203-9879-4400b7960385" in namespace "var-expansion-2558" to be "Succeeded or Failed"
Mar 27 22:09:06.032: INFO: Pod "var-expansion-c6b71122-cf07-4203-9879-4400b7960385": Phase="Pending", Reason="", readiness=false. Elapsed: 34.977793ms
Mar 27 22:09:08.063: INFO: Pod "var-expansion-c6b71122-cf07-4203-9879-4400b7960385": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065688746s
STEP: Saw pod success
Mar 27 22:09:08.063: INFO: Pod "var-expansion-c6b71122-cf07-4203-9879-4400b7960385" satisfied condition "Succeeded or Failed"
Mar 27 22:09:08.092: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod var-expansion-c6b71122-cf07-4203-9879-4400b7960385 container dapi-container: <nil>
STEP: delete the pod
Mar 27 22:09:08.183: INFO: Waiting for pod var-expansion-c6b71122-cf07-4203-9879-4400b7960385 to disappear
Mar 27 22:09:08.212: INFO: Pod var-expansion-c6b71122-cf07-4203-9879-4400b7960385 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 27 22:09:08.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2558" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":283,"completed":225,"skipped":3750,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
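The Variable Expansion test above checks `$(VAR)` substitution in container args, which is performed by the kubelet (not a shell) against the container's declared env vars. An illustrative sketch (names, image, and message are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                   # assumption
    env:
    - name: MESSAGE
      value: "hello from the environment"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]        # $(MESSAGE) is expanded by the kubelet before exec
```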
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 119 lines ...
Mar 27 22:10:01.879: INFO: ss-2  test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 22:09:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 22:09:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-27 22:09:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-27 22:09:29 +0000 UTC  }]
Mar 27 22:10:01.879: INFO: 
Mar 27 22:10:01.879: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-7450
Mar 27 22:10:02.909: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:10:03.251: INFO: rc: 1
Mar 27 22:10:03.251: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
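The "Waiting 10s to retry failed RunHostCmd" lines that follow are the framework's retry loop: re-run a host command, log the non-zero rc, sleep, and try again until it succeeds or the budget runs out. A minimal self-contained sketch of that pattern (function name, parameters, and the demo command are illustrative, not the framework's actual implementation):

```shell
#!/bin/sh
# Sketch of a retry-until-success loop like the one around RunHostCmd.
# run_with_retry CMD MAX INTERVAL: re-run CMD up to MAX times, sleeping
# INTERVAL seconds between attempts; returns 0 on first success.
run_with_retry() {
  cmd=$1; max=$2; interval=$3
  i=1
  while [ "$i" -le "$max" ]; do
    if sh -c "$cmd"; then
      return 0
    fi
    echo "rc: $?; attempt $i/$max failed, waiting ${interval}s to retry" >&2
    sleep "$interval"
    i=$((i + 1))
  done
  return 1
}

# Demo: a command that only succeeds on its third invocation (tracked in a file).
rm -f /tmp/retry_count
run_with_retry \
  'n=$(($(cat /tmp/retry_count 2>/dev/null || echo 0) + 1)); echo "$n" > /tmp/retry_count; [ "$n" -ge 3 ]' \
  5 0
echo "succeeded after $(cat /tmp/retry_count) attempts"
```

In the log below, the loop keeps retrying even after the pod is deleted, so the error transitions from "container not found" to "pods \"ss-2\" not found" while the scale-down completes.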
Mar 27 22:10:13.252: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:10:13.476: INFO: rc: 1
Mar 27 22:10:13.476: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 190 lines ...
Mar 27 22:13:37.819: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:13:38.055: INFO: rc: 1
Mar 27 22:13:38.055: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 27 22:13:48.055: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:13:48.281: INFO: rc: 1
Mar 27 22:13:48.281: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 27 22:13:58.281: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:13:58.513: INFO: rc: 1
Mar 27 22:13:58.513: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 27 22:14:08.514: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:14:08.741: INFO: rc: 1
Mar 27 22:14:08.741: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 27 22:14:18.741: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:14:18.968: INFO: rc: 1
Mar 27 22:14:18.968: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 27 22:14:28.968: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:14:29.195: INFO: rc: 1
Mar 27 22:14:29.195: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 27 22:14:39.196: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:14:39.420: INFO: rc: 1
Mar 27 22:14:39.420: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 27 22:14:49.420: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:14:49.657: INFO: rc: 1
Mar 27 22:14:49.657: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 27 22:14:59.658: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:14:59.885: INFO: rc: 1
Mar 27 22:14:59.885: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 27 22:15:09.885: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://35.244.159.35:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7450 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 27 22:15:10.105: INFO: rc: 1
Mar 27 22:15:10.105: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Mar 27 22:15:10.105: INFO: Scaling statefulset ss to 0
Mar 27 22:15:10.197: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":283,"completed":226,"skipped":3753,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 27 22:15:26.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2538" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":283,"completed":227,"skipped":3763,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 22:15:27.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f86106f-5705-423a-9b95-3d24dce5d5c3" in namespace "projected-5796" to be "Succeeded or Failed"
Mar 27 22:15:27.242: INFO: Pod "downwardapi-volume-3f86106f-5705-423a-9b95-3d24dce5d5c3": Phase="Pending", Reason="", readiness=false. Elapsed: 29.312622ms
Mar 27 22:15:29.272: INFO: Pod "downwardapi-volume-3f86106f-5705-423a-9b95-3d24dce5d5c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059284547s
STEP: Saw pod success
Mar 27 22:15:29.272: INFO: Pod "downwardapi-volume-3f86106f-5705-423a-9b95-3d24dce5d5c3" satisfied condition "Succeeded or Failed"
Mar 27 22:15:29.301: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-3f86106f-5705-423a-9b95-3d24dce5d5c3 container client-container: <nil>
STEP: delete the pod
Mar 27 22:15:29.391: INFO: Waiting for pod downwardapi-volume-3f86106f-5705-423a-9b95-3d24dce5d5c3 to disappear
Mar 27 22:15:29.420: INFO: Pod downwardapi-volume-3f86106f-5705-423a-9b95-3d24dce5d5c3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 27 22:15:29.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5796" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":228,"skipped":3780,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Mar 27 22:15:34.109: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-kdmt8" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-kdmt8 test-rolling-update-deployment-664dd8fc7f- deployment-3997 /api/v1/namespaces/deployment-3997/pods/test-rolling-update-deployment-664dd8fc7f-kdmt8 e7d0f625-5320-4d51-8ccb-9b8dc260711d 26023 0 2020-03-27 22:15:31 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:664dd8fc7f] map[cni.projectcalico.org/podIP:192.168.0.43/32] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f f7522bc7-7fa8-48a8-a362-38b087fd1b88 0xc004f688d7 0xc004f688d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-94f4k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-94f4k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-94f4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:15:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:15:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:15:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:15:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:192.168.0.43,StartTime:2020-03-27 22:15:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-27 22:15:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://4fa9a05f3e327b9b75a88a79f6a0634aff65e04eabd4cf5ca445c1a0af32b8d8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 27 22:15:34.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3997" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":283,"completed":229,"skipped":3781,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 22:15:34.365: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b3869dd-d403-42ef-ac74-3f48661dd820" in namespace "downward-api-2667" to be "Succeeded or Failed"
Mar 27 22:15:34.392: INFO: Pod "downwardapi-volume-6b3869dd-d403-42ef-ac74-3f48661dd820": Phase="Pending", Reason="", readiness=false. Elapsed: 27.719894ms
Mar 27 22:15:36.424: INFO: Pod "downwardapi-volume-6b3869dd-d403-42ef-ac74-3f48661dd820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059089618s
STEP: Saw pod success
Mar 27 22:15:36.424: INFO: Pod "downwardapi-volume-6b3869dd-d403-42ef-ac74-3f48661dd820" satisfied condition "Succeeded or Failed"
Mar 27 22:15:36.453: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-6b3869dd-d403-42ef-ac74-3f48661dd820 container client-container: <nil>
STEP: delete the pod
Mar 27 22:15:36.535: INFO: Waiting for pod downwardapi-volume-6b3869dd-d403-42ef-ac74-3f48661dd820 to disappear
Mar 27 22:15:36.566: INFO: Pod downwardapi-volume-6b3869dd-d403-42ef-ac74-3f48661dd820 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 27 22:15:36.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2667" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":230,"skipped":3821,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Mar 27 22:15:38.930: INFO: Trying to dial the pod
Mar 27 22:15:44.024: INFO: Controller my-hostname-basic-1dbc8030-ff42-49cf-a5e8-55273a925cdd: Got expected result from replica 1 [my-hostname-basic-1dbc8030-ff42-49cf-a5e8-55273a925cdd-x98q9]: "my-hostname-basic-1dbc8030-ff42-49cf-a5e8-55273a925cdd-x98q9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 27 22:15:44.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6022" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":231,"skipped":3821,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 27 22:15:44.115: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Mar 27 22:15:44.242: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 27 22:15:46.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3333" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":283,"completed":232,"skipped":3843,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 22:15:51.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-778" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":283,"completed":233,"skipped":3895,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 22:16:13.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4719" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":283,"completed":234,"skipped":3900,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-07336b57-51c3-4a68-b1b4-37d05a091c63
STEP: Creating a pod to test consume configMaps
Mar 27 22:16:13.540: INFO: Waiting up to 5m0s for pod "pod-configmaps-19afd9e9-2683-4d8d-84d1-bad673693fe9" in namespace "configmap-1455" to be "Succeeded or Failed"
Mar 27 22:16:13.569: INFO: Pod "pod-configmaps-19afd9e9-2683-4d8d-84d1-bad673693fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.32959ms
Mar 27 22:16:15.598: INFO: Pod "pod-configmaps-19afd9e9-2683-4d8d-84d1-bad673693fe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.057977043s
STEP: Saw pod success
Mar 27 22:16:15.598: INFO: Pod "pod-configmaps-19afd9e9-2683-4d8d-84d1-bad673693fe9" satisfied condition "Succeeded or Failed"
Mar 27 22:16:15.627: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-configmaps-19afd9e9-2683-4d8d-84d1-bad673693fe9 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 22:16:15.710: INFO: Waiting for pod pod-configmaps-19afd9e9-2683-4d8d-84d1-bad673693fe9 to disappear
Mar 27 22:16:15.741: INFO: Pod pod-configmaps-19afd9e9-2683-4d8d-84d1-bad673693fe9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 22:16:15.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1455" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":235,"skipped":3908,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Mar 27 22:17:02.759: INFO: Restart count of pod container-probe-1101/busybox-1103619d-8f99-464b-8a70-444074661124 is now 1 (44.682447653s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 27 22:17:02.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1101" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":283,"completed":236,"skipped":3926,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 27 22:17:03.028: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 22:17:06.280: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 22:17:20.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5943" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":283,"completed":237,"skipped":3953,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Mar 27 22:17:22.650: INFO: Pod "test-recreate-deployment-5f94c574ff-4xjvx" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-4xjvx test-recreate-deployment-5f94c574ff- deployment-9364 /api/v1/namespaces/deployment-9364/pods/test-recreate-deployment-5f94c574ff-4xjvx 516c5a74-3486-467f-90f0-a4391da928fb 26614 0 2020-03-27 22:17:22 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 6d0cdb27-4fe8-41f9-ac96-fedef3fd400c 0xc004145457 0xc004145458}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f9bt9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f9bt9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f9bt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:17:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:17:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:17:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-27 22:17:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.4,PodIP:,StartTime:2020-03-27 22:17:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 27 22:17:22.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9364" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":283,"completed":238,"skipped":3964,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 27 22:17:25.661: INFO: Successfully updated pod "annotationupdate98bb2ded-65fe-43d6-a677-4ee49c1393db"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 27 22:17:29.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-884" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":239,"skipped":3966,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
Mar 27 22:17:44.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8036" for this suite.
STEP: Destroying namespace "webhook-8036-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":283,"completed":240,"skipped":4029,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-b6f1053c-824a-4f7b-a830-5999e0c6aa6c
STEP: Creating a pod to test consume configMaps
Mar 27 22:17:44.940: INFO: Waiting up to 5m0s for pod "pod-configmaps-0f9e77b8-eff6-4420-845c-f3e1ed146819" in namespace "configmap-4171" to be "Succeeded or Failed"
Mar 27 22:17:44.981: INFO: Pod "pod-configmaps-0f9e77b8-eff6-4420-845c-f3e1ed146819": Phase="Pending", Reason="", readiness=false. Elapsed: 40.614129ms
Mar 27 22:17:47.010: INFO: Pod "pod-configmaps-0f9e77b8-eff6-4420-845c-f3e1ed146819": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.070068074s
STEP: Saw pod success
Mar 27 22:17:47.010: INFO: Pod "pod-configmaps-0f9e77b8-eff6-4420-845c-f3e1ed146819" satisfied condition "Succeeded or Failed"
Mar 27 22:17:47.040: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-configmaps-0f9e77b8-eff6-4420-845c-f3e1ed146819 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 27 22:17:47.114: INFO: Waiting for pod pod-configmaps-0f9e77b8-eff6-4420-845c-f3e1ed146819 to disappear
Mar 27 22:17:47.144: INFO: Pod pod-configmaps-0f9e77b8-eff6-4420-845c-f3e1ed146819 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 27 22:17:47.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4171" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":241,"skipped":4066,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 22:17:47.237: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 27 22:17:47.397: INFO: Waiting up to 5m0s for pod "pod-f25ea794-3475-4e5a-90fa-860cef9bfe4b" in namespace "emptydir-9379" to be "Succeeded or Failed"
Mar 27 22:17:47.430: INFO: Pod "pod-f25ea794-3475-4e5a-90fa-860cef9bfe4b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.505689ms
Mar 27 22:17:49.460: INFO: Pod "pod-f25ea794-3475-4e5a-90fa-860cef9bfe4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0632752s
STEP: Saw pod success
Mar 27 22:17:49.460: INFO: Pod "pod-f25ea794-3475-4e5a-90fa-860cef9bfe4b" satisfied condition "Succeeded or Failed"
Mar 27 22:17:49.488: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod pod-f25ea794-3475-4e5a-90fa-860cef9bfe4b container test-container: <nil>
STEP: delete the pod
Mar 27 22:17:49.573: INFO: Waiting for pod pod-f25ea794-3475-4e5a-90fa-860cef9bfe4b to disappear
Mar 27 22:17:49.606: INFO: Pod pod-f25ea794-3475-4e5a-90fa-860cef9bfe4b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 22:17:49.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9379" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":242,"skipped":4083,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 27 22:17:49.705: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 27 22:17:49.885: INFO: Waiting up to 5m0s for pod "pod-e2bd720e-c194-40f5-8108-dcc297d4b6f5" in namespace "emptydir-2552" to be "Succeeded or Failed"
Mar 27 22:17:49.918: INFO: Pod "pod-e2bd720e-c194-40f5-8108-dcc297d4b6f5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.169759ms
Mar 27 22:17:51.948: INFO: Pod "pod-e2bd720e-c194-40f5-8108-dcc297d4b6f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062044078s
STEP: Saw pod success
Mar 27 22:17:51.948: INFO: Pod "pod-e2bd720e-c194-40f5-8108-dcc297d4b6f5" satisfied condition "Succeeded or Failed"
Mar 27 22:17:51.977: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-e2bd720e-c194-40f5-8108-dcc297d4b6f5 container test-container: <nil>
STEP: delete the pod
Mar 27 22:17:52.061: INFO: Waiting for pod pod-e2bd720e-c194-40f5-8108-dcc297d4b6f5 to disappear
Mar 27 22:17:52.093: INFO: Pod pod-e2bd720e-c194-40f5-8108-dcc297d4b6f5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 27 22:17:52.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2552" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":243,"skipped":4084,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 27 22:17:52.326: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 22:17:55.602: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 22:18:08.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4442" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":283,"completed":244,"skipped":4144,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 27 22:18:08.635: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-d77929a9-7b90-46ce-85b1-edf055cefe3c" in namespace "security-context-test-7536" to be "Succeeded or Failed"
Mar 27 22:18:08.667: INFO: Pod "busybox-readonly-false-d77929a9-7b90-46ce-85b1-edf055cefe3c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.902156ms
Mar 27 22:18:10.697: INFO: Pod "busybox-readonly-false-d77929a9-7b90-46ce-85b1-edf055cefe3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06125058s
Mar 27 22:18:10.697: INFO: Pod "busybox-readonly-false-d77929a9-7b90-46ce-85b1-edf055cefe3c" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 27 22:18:10.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7536" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":283,"completed":245,"skipped":4157,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 27 22:18:22.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8715" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":283,"completed":246,"skipped":4186,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 27 22:18:22.225: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 27 22:18:22.393: INFO: Waiting up to 5m0s for pod "downward-api-6aa924d8-a499-4a91-95a9-f36a7b095ae8" in namespace "downward-api-8830" to be "Succeeded or Failed"
Mar 27 22:18:22.426: INFO: Pod "downward-api-6aa924d8-a499-4a91-95a9-f36a7b095ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.810791ms
Mar 27 22:18:24.456: INFO: Pod "downward-api-6aa924d8-a499-4a91-95a9-f36a7b095ae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062669769s
STEP: Saw pod success
Mar 27 22:18:24.456: INFO: Pod "downward-api-6aa924d8-a499-4a91-95a9-f36a7b095ae8" satisfied condition "Succeeded or Failed"
Mar 27 22:18:24.486: INFO: Trying to get logs from node test1-md-0-zjwjt.c.k8s-jkns-gci-gce-1-3.internal pod downward-api-6aa924d8-a499-4a91-95a9-f36a7b095ae8 container dapi-container: <nil>
STEP: delete the pod
Mar 27 22:18:24.568: INFO: Waiting for pod downward-api-6aa924d8-a499-4a91-95a9-f36a7b095ae8 to disappear
Mar 27 22:18:24.599: INFO: Pod downward-api-6aa924d8-a499-4a91-95a9-f36a7b095ae8 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 27 22:18:24.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8830" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":283,"completed":247,"skipped":4196,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 27 22:18:26.960: INFO: Trying to dial the pod
Mar 27 22:18:32.051: INFO: Controller my-hostname-basic-87e3d546-ca2b-4df7-9bbe-23c4646a3a95: Got expected result from replica 1 [my-hostname-basic-87e3d546-ca2b-4df7-9bbe-23c4646a3a95-4md9j]: "my-hostname-basic-87e3d546-ca2b-4df7-9bbe-23c4646a3a95-4md9j", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 27 22:18:32.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7524" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":248,"skipped":4210,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 27 22:18:32.305: INFO: Waiting up to 5m0s for pod "busybox-user-65534-b0ddbaff-3eea-4d02-8f36-66059a2b0753" in namespace "security-context-test-9802" to be "Succeeded or Failed"
Mar 27 22:18:32.336: INFO: Pod "busybox-user-65534-b0ddbaff-3eea-4d02-8f36-66059a2b0753": Phase="Pending", Reason="", readiness=false. Elapsed: 30.43223ms
Mar 27 22:18:34.365: INFO: Pod "busybox-user-65534-b0ddbaff-3eea-4d02-8f36-66059a2b0753": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059609721s
Mar 27 22:18:34.365: INFO: Pod "busybox-user-65534-b0ddbaff-3eea-4d02-8f36-66059a2b0753" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 27 22:18:34.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9802" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":249,"skipped":4238,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:175
Mar 27 22:18:34.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1819" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":283,"completed":250,"skipped":4238,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 27 22:18:37.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7269" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":283,"completed":251,"skipped":4242,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
Mar 27 22:18:40.770: INFO: Pod "adopt-release-fcsnc": Phase="Running", Reason="", readiness=true. Elapsed: 33.257318ms
Mar 27 22:18:40.770: INFO: Pod "adopt-release-fcsnc" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 27 22:18:40.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4615" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":283,"completed":252,"skipped":4275,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 67 lines ...
Mar 27 22:18:56.711: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7151/pods","resourceVersion":"27405"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 27 22:18:56.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7151" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":283,"completed":253,"skipped":4286,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 22:18:57.062: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0310d819-e14a-417c-8d3d-34b2887bd74e" in namespace "downward-api-5515" to be "Succeeded or Failed"
Mar 27 22:18:57.095: INFO: Pod "downwardapi-volume-0310d819-e14a-417c-8d3d-34b2887bd74e": Phase="Pending", Reason="", readiness=false. Elapsed: 33.233542ms
Mar 27 22:18:59.125: INFO: Pod "downwardapi-volume-0310d819-e14a-417c-8d3d-34b2887bd74e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062851794s
STEP: Saw pod success
Mar 27 22:18:59.125: INFO: Pod "downwardapi-volume-0310d819-e14a-417c-8d3d-34b2887bd74e" satisfied condition "Succeeded or Failed"
Mar 27 22:18:59.154: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-0310d819-e14a-417c-8d3d-34b2887bd74e container client-container: <nil>
STEP: delete the pod
Mar 27 22:18:59.228: INFO: Waiting for pod downwardapi-volume-0310d819-e14a-417c-8d3d-34b2887bd74e to disappear
Mar 27 22:18:59.269: INFO: Pod downwardapi-volume-0310d819-e14a-417c-8d3d-34b2887bd74e no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 27 22:18:59.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5515" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":254,"skipped":4293,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 27 22:18:59.483: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 27 22:19:02.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7910" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":283,"completed":255,"skipped":4296,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 27 22:19:05.290: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 27 22:19:05.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8548" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":283,"completed":256,"skipped":4299,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 27 22:19:45.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0327 22:19:45.808479   24935 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-9474" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":283,"completed":257,"skipped":4307,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 27 22:19:46.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8404" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":283,"completed":258,"skipped":4325,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 29 lines ...
Mar 27 22:20:06.727: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 27 22:20:06.757: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 27 22:20:06.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5591" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":283,"completed":259,"skipped":4369,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 22:20:07.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d09e05a2-345f-44b7-ab6b-cbd12a3a5242" in namespace "downward-api-7308" to be "Succeeded or Failed"
Mar 27 22:20:07.043: INFO: Pod "downwardapi-volume-d09e05a2-345f-44b7-ab6b-cbd12a3a5242": Phase="Pending", Reason="", readiness=false. Elapsed: 29.779173ms
Mar 27 22:20:09.073: INFO: Pod "downwardapi-volume-d09e05a2-345f-44b7-ab6b-cbd12a3a5242": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059275734s
STEP: Saw pod success
Mar 27 22:20:09.073: INFO: Pod "downwardapi-volume-d09e05a2-345f-44b7-ab6b-cbd12a3a5242" satisfied condition "Succeeded or Failed"
Mar 27 22:20:09.103: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-d09e05a2-345f-44b7-ab6b-cbd12a3a5242 container client-container: <nil>
STEP: delete the pod
Mar 27 22:20:09.180: INFO: Waiting for pod downwardapi-volume-d09e05a2-345f-44b7-ab6b-cbd12a3a5242 to disappear
Mar 27 22:20:09.210: INFO: Pod downwardapi-volume-d09e05a2-345f-44b7-ab6b-cbd12a3a5242 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 27 22:20:09.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7308" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":283,"completed":260,"skipped":4381,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Mar 27 22:20:09.640: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4018 /api/v1/namespaces/watch-4018/configmaps/e2e-watch-test-resource-version 8e8b664d-b001-44da-8eac-34e4301cd5db 28036 0 2020-03-27 22:20:09 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 27 22:20:09.640: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4018 /api/v1/namespaces/watch-4018/configmaps/e2e-watch-test-resource-version 8e8b664d-b001-44da-8eac-34e4301cd5db 28037 0 2020-03-27 22:20:09 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 27 22:20:09.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4018" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":283,"completed":261,"skipped":4421,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-2be32168-f83b-4d60-8211-b6df27d2319e
STEP: Creating a pod to test consume secrets
Mar 27 22:20:09.898: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cb93a733-e2d7-4ed2-9a6c-06106e59e264" in namespace "projected-1023" to be "Succeeded or Failed"
Mar 27 22:20:09.928: INFO: Pod "pod-projected-secrets-cb93a733-e2d7-4ed2-9a6c-06106e59e264": Phase="Pending", Reason="", readiness=false. Elapsed: 29.627998ms
Mar 27 22:20:11.958: INFO: Pod "pod-projected-secrets-cb93a733-e2d7-4ed2-9a6c-06106e59e264": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059436832s
STEP: Saw pod success
Mar 27 22:20:11.958: INFO: Pod "pod-projected-secrets-cb93a733-e2d7-4ed2-9a6c-06106e59e264" satisfied condition "Succeeded or Failed"
Mar 27 22:20:11.987: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod pod-projected-secrets-cb93a733-e2d7-4ed2-9a6c-06106e59e264 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 27 22:20:12.069: INFO: Waiting for pod pod-projected-secrets-cb93a733-e2d7-4ed2-9a6c-06106e59e264 to disappear
Mar 27 22:20:12.108: INFO: Pod pod-projected-secrets-cb93a733-e2d7-4ed2-9a6c-06106e59e264 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 27 22:20:12.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1023" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":262,"skipped":4421,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 27 22:20:12.330: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 27 22:20:18.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1380" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":283,"completed":263,"skipped":4443,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 27 22:20:18.534: INFO: stderr: ""
Mar 27 22:20:18.534: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.1.73+5317a3160c5336\", GitCommit:\"5317a3160c53366fa84db7536e1ae110ecc69eb9\", GitTreeState:\"clean\", BuildDate:\"2020-02-11T14:24:02Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.1\", GitCommit:\"d647ddbd755faf07169599a625faf302ffc34458\", GitTreeState:\"clean\", BuildDate:\"2019-10-02T16:51:36Z\", GoVersion:\"go1.12.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 27 22:20:18.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1025" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":283,"completed":264,"skipped":4468,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 17 lines ...
Mar 27 22:20:30.926: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 27 22:20:32.926: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 27 22:20:34.926: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 27 22:20:36.925: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 27 22:20:36.984: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-03-27T22:20:37Z"}
Mar 27 22:20:39.165: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.6:8080/dial?request=hostname&protocol=udp&host=192.168.63.79&port=8081&tries=1'] Namespace:pod-network-test-6813 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 27 22:20:39.165: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 22:20:39.427: INFO: Waiting for responses: map[]
Mar 27 22:20:39.457: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.6:8080/dial?request=hostname&protocol=udp&host=192.168.0.5&port=8081&tries=1'] Namespace:pod-network-test-6813 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 27 22:20:39.457: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 27 22:20:39.715: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 27 22:20:39.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6813" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":283,"completed":265,"skipped":4474,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 27 22:20:44.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4727" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":283,"completed":266,"skipped":4494,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 27 22:20:45.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d38d2c8f-20f9-4ccf-a365-766e67e36dac" in namespace "downward-api-6681" to be "Succeeded or Failed"
Mar 27 22:20:45.033: INFO: Pod "downwardapi-volume-d38d2c8f-20f9-4ccf-a365-766e67e36dac": Phase="Pending", Reason="", readiness=false. Elapsed: 28.910725ms
Mar 27 22:20:47.062: INFO: Pod "downwardapi-volume-d38d2c8f-20f9-4ccf-a365-766e67e36dac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.05850955s
STEP: Saw pod success
Mar 27 22:20:47.062: INFO: Pod "downwardapi-volume-d38d2c8f-20f9-4ccf-a365-766e67e36dac" satisfied condition "Succeeded or Failed"
Mar 27 22:20:47.091: INFO: Trying to get logs from node test1-md-0-55jsz.c.k8s-jkns-gci-gce-1-3.internal pod downwardapi-volume-d38d2c8f-20f9-4ccf-a365-766e67e36dac container client-container: <nil>
STEP: delete the pod
Mar 27 22:20:47.168: INFO: Waiting for pod downwardapi-volume-d38d2c8f-20f9-4ccf-a365-766e67e36dac to disappear
Mar 27 22:20:47.197: INFO: Pod downwardapi-volume-d38d2c8f-20f9-4ccf-a365-766e67e36dac no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 27 22:20:47.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6681" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":267,"skipped":4504,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 40 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 27 22:20:52.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3418" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":283,"completed":268,"skipped":4512,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 2 lines ...
Mar 27 22:20:52.142: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-03-27T22:20:52Z"}